Module Libraries

Each EVS module can be considered a software application; modules are combined by the user to form high-level customized applications that perform analysis and visualization. These modules have input and output ports and user interfaces.

The library of modules is grouped into the following categories:

  • Estimation modules take sparse data and map it to surface and volumetric grids
  • Geology modules provide methods to create surfaces or 3D volumetric grids with lithology and stratigraphy assigned to groups of cells
  • Display modules are focused on visualization functions
  • Analysis modules provide quantification and statistical information
  • Annotation modules allow you to add axes, titles and other references to your visualizations
  • Subsetting modules extract a subset of your grids or data in order to perform boolean operations
  • Proximity modules create new data which can be used to subset or assess proximity to surfaces, areas or lines.
  • Processing modules act on your data
  • Import modules read files that contain grids, data and/or archives
  • Export modules write files that contain grids, data and/or archives
  • Modeling modules are focused on functionality related to simulations and vector data
  • Geometry modules create or act upon grids and geometric primitives
  • Projection modules transform grids into other coordinates or dimensionality
  • Image modules are focused on aerial photos or bitmap operations
  • Time modules provide the ability to deal with time domain data
  • Tools are a collection of modules to make life easier
  • View modules are focused on visualization and output of results
  • Legacy Module Naming

    Effective October 2021, there was a major revision to module naming. The table below lists the old and new names. Also note that the Cell Data library was eliminated, with its modules moved to Processing. In general the new module names are intended to be more descriptive of each module’s functionality. For example, krig_3d_geology was named over 25 years ago when we developed it to create 3D stratigraphic models using kriging to estimate the horizons. It no longer uses kriging as its default estimation method (it is now one of many), and it is often used to build grids that are solely conformal to surface topography. Its new name, “gridding and horizons”, is far more descriptive of its current use.

  • Estimation

    3d estimation performs parameter estimation using kriging and other methods to map 3D analytical data onto volumetric grids defined by the limits of the data set, or by the convex hull, rectilinear, or finite-difference grid extents of a geologic system modeled by gridding and horizons. 3d estimation provides several convenient options for pre- and post-processing the input parameter values, and allows the user to consider anisotropy in the medium containing the property.

  • Geology

    create stratigraphic hierarchy The create stratigraphic hierarchy module reads a special input file format called a pgf file, and then allows the user to build geologic surfaces based on the input file’s geologic surface intersections. This process is carried out visually (in the EVS viewer) with the create stratigraphic hierarchy user interface. The surface hierarchy can be generated automatically for simple geology models, or built for every layer individually for complex models. When the user is finished creating surfaces, the gmf file can be finalized and converted into a *.GEO file.

  • Display

    post_samples The post_samples module is used to visualize:

      • Sampling locations and the values of the properties in .apdv files
      • The lithology specified in .pgf, .lsdv, .lpdv or .geo files
      • The location and values of well screens in .aidv files

    Warning: When using the Datamap parameters (Minimum and Maximum) unlinked, such that the resulting datamap is a subset of the true data range, probing in C Tech Web Scenes will only be able to report values within the truncated data range. Values outside that limited range will display the nearest value within the truncated range.
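The warning above amounts to clamping: a probe reports the nearest value inside the truncated datamap range. A minimal sketch of that behavior (the function name is hypothetical, not part of EVS):

```python
def probe_value(value, datamap_min, datamap_max):
    """Report a probed value the way the warning above describes:
    values outside the truncated datamap range display the nearest
    value within that range (i.e. they are clamped).
    Hypothetical illustration only -- not the EVS API."""
    return min(max(value, datamap_min), datamap_max)
```

For example, with a datamap truncated to 0–100, probing a location whose true value is 150 reports 100.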

  • Analysis

    volumetrics The volumetrics module is used to calculate the volumes and masses of soil, and of chemicals in soils and groundwater, within a user-specified constant_shell (surface of constant concentration) and set of geologic layers. The user inputs the units for the nodal properties and model coordinates, and the type of processing that has been applied to the nodal data values, then specifies the subsetting level and the soil and chemical properties to be used in the calculation; the module performs an integration of both the soil volumes and chemical masses that are within the specified constant_shell. The results of the integration are displayed in the EVS Information Window and in the module output window.
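The integration volumetrics performs can be sketched as a sum over the cells inside the constant_shell. This is a simplified stand-in (uniform concentration per cell, hypothetical function name), not the actual EVS algorithm:

```python
def plume_volume_and_mass(cells, threshold):
    """Integrate soil volume and chemical mass inside a constant_shell,
    i.e. over cells whose concentration meets the subsetting level.
    `cells` is a list of (volume_m3, concentration_mg_per_m3) pairs.
    Illustrative sketch of the integration idea; not the EVS API."""
    volume = 0.0
    mass = 0.0
    for vol, conc in cells:
        if conc >= threshold:       # cell is inside the constant_shell
            volume += vol
            mass += vol * conc      # m^3 * mg/m^3 = mg
    return volume, mass
```

A real implementation would integrate over partial cells cut by the shell surface; this sketch only shows the whole-cell accumulation.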

  • Annotation

    legend The legend module is used to place a legend which helps correlate colors to analytical values or materials. The legend shows the relationship between the selected data component for a particular module and the colors shown in the viewer. For this reason, the legend’s RED input port must be connected to the RED output port of a module which is connected to the viewer and is generally the dominant colored object in view.

  • Subsetting

    external_faces The external_faces module extracts external faces from a 2D or 3D field for rendering. external_faces produces a mesh of only the external faces of each cell set of a data set. Because the external faces of each cell set are created, there may be faces that are seemingly internal (vs. external). This is especially true when external_faces is used subsequent to a plume module on 3D (volumetric) input.
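One common way to extract external faces, consistent with the per-cell-set behavior described above, is to count how many cells share each face: faces used by exactly one cell are external. A sketch of that idea (not the EVS implementation):

```python
from collections import Counter

def external_faces(cells):
    """Return the faces used by exactly one cell (external faces).
    `cells` is a list of cells, each a list of faces, each face a tuple
    of node indices. Faces shared by two cells are internal. Running this
    per cell set (as the module does) can leave seemingly internal faces
    at cell-set boundaries. Illustrative sketch only."""
    counts = Counter()
    for faces in cells:
        for face in faces:
            # sort node indices so the same face matches regardless of winding
            counts[tuple(sorted(face))] += 1
    return [face for face, n in counts.items() if n == 1]
```

With two cells sharing one face, that shared face is dropped and all others are kept.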

  • Proximity

    distance to 2d area receives any 3D field into its left input port and triangulated polygons (from triangulate_polygon or other sources) into its right input port. Its function is similar to buffer distance or distance to shape. It adds a data component to the input 3D field, and using plume_shell you can cut structures inside or outside of the input polygons. Only the x and y coordinates of the polygons are used because distance to 2d area cuts a projected slice that is z-invariant. distance to 2d area recalculates when either input field is changed or the “Accept” button is pressed.
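The z-invariant behavior can be sketched as a purely 2D computation: only x and y are used, so every node in a vertical column receives the same distance value. A minimal illustration with a polygon outline (hypothetical function names, not the EVS API):

```python
import math

def point_to_segment_2d(px, py, ax, ay, bx, by):
    """Distance in the xy plane from (px, py) to segment (a, b)."""
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def distance_to_2d_area(node, polygon):
    """Z-invariant distance from a 3D node to a closed 2D polygon outline:
    only x and y of the node are used, as described above. Sketch only."""
    px, py = node[0], node[1]          # node z is ignored
    n = len(polygon)
    return min(point_to_segment_2d(px, py, *polygon[i], *polygon[(i + 1) % n])
               for i in range(n))
```

Note that nodes at any elevation above the same xy location get identical values, which is what makes the resulting cut a projected, z-invariant slice.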

  • Processing

    node_computation The node_computation module is used to perform mathematical operations on nodal data fields and coordinates. Data values can be used to affect coordinates (x, y, or z) and coordinates can be used to affect data values. Up to two fields can be input to node_computation, and mathematical expressions can involve one or both of the input fields. The fields must be identical grids: they must have the same number of nodes and cells, otherwise the results will not make sense.
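The identical-grids requirement can be illustrated with a toy per-node expression evaluator (hypothetical sketch; EVS has its own expression syntax, and this is not it):

```python
def node_computation(field_a, field_b, expression):
    """Apply a per-node expression across two fields. As noted above, the
    fields must be identical grids (same node count) for a per-node
    operation to make sense. Each field is a dict with a 'data' list.
    Hypothetical illustration only -- not the EVS expression language."""
    if len(field_a["data"]) != len(field_b["data"]):
        raise ValueError("fields must be identical grids (same node count)")
    return [expression(a, b) for a, b in zip(field_a["data"], field_b["data"])]
```

For example, summing two nodal data components is just a per-node `a + b` over paired nodes.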

  • Import

    read evs field reads a dataset from the primary and legacy file formats created by write evs field:

      • .EF2: the only lossless format for models created in 2024 and later versions
      • .eff: ASCII format, best if you want to be able to open the file in an editor or print it
      • .efz: GNU Zip compressed ASCII; the same as .eff but in a zip archive
      • .efb: binary compressed format; the smallest and fastest format due to its binary form

    Output Quality: An important feature of read evs field is the ability to specify two separate files which correspond to High Quality (e.g. fine grids) and Low Quality (e.g. coarse grids, a.k.a. fast).

  • Export

    write evs field The write evs field module creates a file in one of several formats containing the mesh and the nodal and/or cell data component information sent to its input port. This module is useful for writing the output of modules which manipulate or interpolate data (3d estimation, 2d estimation, etc.) so that the data will not need to be reprocessed in the future.

  • Sequences

    driven sequence The driven sequence module controls the semi-automatic creation of sequences for the following modules:

    scripted sequence The scripted sequence module provides the most power and flexibility, but requires creating a Python script which sets the states of all modules to be sequenced.

    object sequence This is the simplest of the sequence modules, but also the easiest to abuse (vs. using scripted sequence, where you can be more efficient).

  • Modeling

    3d streamlines The 3d streamlines module is used to produce streamlines or stream-ribbons of a field which has a 2- or 3-element vector data component on any type of mesh. Streamlines, which are simply 3D polylines, represent the pathways particles would travel based on the gradient of the vector field. At least one of the nodal data components input to 3d streamlines must be a vector. The direction of travel of streamlines can be specified to be forwards (toward high vector magnitudes) or backwards (toward low vector magnitudes) with respect to the vector field. Streamlines are produced by integrating a velocity field using the Runge-Kutta method of specified order with adaptive time steps.
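The integration step can be sketched with a fixed-step fourth-order Runge-Kutta (RK4) advance; the real module uses a specified order with adaptive time steps. Backward travel simply negates the vector field. A minimal sketch (hypothetical function names):

```python
def _axpy(p, k, s):
    """Return p + s * k componentwise for 3-tuples."""
    return tuple(pi + s * ki for pi, ki in zip(p, k))

def rk4_step(velocity, p, dt):
    """One fixed-step RK4 advance of a particle through a velocity field.
    `velocity` maps an (x, y, z) tuple to a (vx, vy, vz) tuple."""
    k1 = velocity(p)
    k2 = velocity(_axpy(p, k1, dt / 2))
    k3 = velocity(_axpy(p, k2, dt / 2))
    k4 = velocity(_axpy(p, k3, dt))
    return tuple(pi + dt / 6 * (a + 2 * b + 2 * c + d)
                 for pi, a, b, c, d in zip(p, k1, k2, k3, k4))

def streamline(velocity, seed, dt, steps, forward=True):
    """Trace a 3D polyline through the field from a seed point. Backward
    integration (toward low magnitudes) just negates the field, matching
    the forwards/backwards option described above. Sketch only."""
    sign = 1.0 if forward else -1.0
    signed = lambda q: tuple(sign * c for c in velocity(q))
    path = [tuple(seed)]
    for _ in range(steps):
        path.append(rk4_step(signed, path[-1], dt))
    return path
```

In a uniform field the traced polyline is a straight line of evenly spaced points, which makes this easy to sanity-check.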

  • Geometry

    draw_lines The draw_lines module enables you to create both 2D and 3D lines interactively with the mouse. The mouse gesture for line creation is: depress the Ctrl key and then click the left mouse button on any pickable object in the viewer. The first click establishes the beginning point of the line and each successive click establishes another point.

    polyline processing The polyline processing module accepts a 3D polyline and can either increase or decrease the number of line segments of the polyline. A splining algorithm smooths the line trajectory once the number of points is specified. This module is useful for applications such as a fly-over along a polyline path drawn by the user. If the user-drawn line is jagged with erratically spaced line segments, polyline processing smooths the path and creates evenly spaced line segments along it.
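The even-spacing step of polyline processing can be sketched as resampling the polyline at equal arc-length intervals (the splining pass is omitted for brevity; function name is illustrative, not the EVS API):

```python
import math

def resample_polyline(points, n):
    """Resample a 3D polyline to n points evenly spaced along its length,
    as polyline processing does when evening out erratically spaced
    segments. `points` is a list of (x, y, z) tuples. Sketch only."""
    seg = [math.dist(a, b) for a, b in zip(points, points[1:])]
    total = sum(seg)
    out = [points[0]]
    for i in range(1, n - 1):
        target = total * i / (n - 1)   # arc length of the i-th new point
        acc, j = 0.0, 0
        while acc + seg[j] < target:   # find the segment containing target
            acc += seg[j]
            j += 1
        t = (target - acc) / seg[j]    # fraction along that segment
        a, b = points[j], points[j + 1]
        out.append(tuple(ai + t * (bi - ai) for ai, bi in zip(a, b)))
    out.append(points[-1])
    return out
```

Applied to a jagged user-drawn path, this yields evenly spaced points suitable for a smooth fly-over.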

  • Projection

    project onto surface provides a mechanism to drape lines and triangles (surfaces) onto surfaces. Please note that a pseudo-3D object like a building made up of triangle faces will be flattened onto the surface; the 3D nature will not be preserved. Lines and surfaces are subsetted to match the size of the cells of the surface on which they are draped. In other words, draped objects will match the surface precisely.

  • Image

    overlay_aerial The overlay_aerial module takes a field as input and maps an image onto the horizontal areas of the grid. The image can be projected from one coordinate system to another, and can also be georeferenced if it has an accompanying georeferencing file. All vertical surfaces (Walls) can be included in the output but will not have image data mapped to them.

    texture_cross_section texture_cross_section allows you to apply images along a complex non-linear cross section path and compensate for the image scale.

  • Time

    read_tcf The read_tcf module is specifically designed to create models and animations of data that changes over time. This type of data can result from water table elevation and/or chemical measurements taken at discrete times, or from the output of groundwater simulations or other 3D time-domain simulations. The read_tcf module creates a field using a Time Control File (.TCF) to specify the date/time, field and corresponding data component to read (in netCDF, Field or UCD format) for each time step of a time_data field. All file types specified in the TCF file must be the same (e.g. all netCDF or all UCD). The same file can be repeated, specifying different data components to represent different time steps of the output.

  • Tools

    group objects A group objects is a renderable object that contains other subobjects along with the attributes that control how the rendering is done. Unlike a DataObject, group objects does not include data. Instead, it is meant to be a node in the rendering hierarchy that groups other DataObjects together and supplies common attributes for them. This object is connected directly to one of the viewers (for example, Simpleviewer3D), to another DataObject, or to another group objects. A group objects is included in all the standard viewers provided with the EVS applications.

  • View

    viewer The viewer accepts renderable objects from all modules with red output ports to include their output in the view.

    Module Input Ports

      • Objects [Renderable]: Receives renderable objects from any number of modules

    Module Output Ports

      • View [View / minor]: Outputs the view information used by other modules to provide model extents or interactivity

    viewer Properties: The user interfaces for the viewer are arranged in 10 categories which cover interaction with the scene and the characteristics of the viewer, as well as various output options.

  • Deprecated

    scat_to_unif The scat_to_unif module is used to convert scattered sample data into a three-dimensional uniform field. scat_to_unif can also be used to take an existing grid (for example a UCD file) and convert it to a uniform field. scat_to_unif converts a field of non-uniformly spaced points into a uniform field which can be used with many of EVS’s filter and mapper modules. “Scattered sample data” means that there are disconnected nodes in space. An example would be geology or analyte (e.g. chemistry) data where the coordinates are the x, y, and elevation of a measured parameter. The data is “scattered” because there isn’t data for every x/y/elevation of interest.
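The scattered-to-uniform conversion can be sketched as building a uniform grid over the samples’ bounding box and assigning each grid node a value from the data, here by nearest-sample lookup (a deliberate simplification; the actual module offers more sophisticated methods):

```python
import math

def scat_to_unif(samples, nx, ny, nz):
    """Convert scattered (x, y, z, value) samples to a uniform grid over
    their bounding box by nearest-sample lookup -- a simple stand-in for
    the conversion scat_to_unif performs. Returns (node_xyz, value) pairs.
    Illustrative sketch only, not the EVS implementation."""
    xs, ys, zs, _vals = zip(*samples)
    lo = (min(xs), min(ys), min(zs))
    hi = (max(xs), max(ys), max(zs))

    def axis(i, k, n):
        # k-th of n uniformly spaced coordinates along axis i
        return lo[i] if n == 1 else lo[i] + (hi[i] - lo[i]) * k / (n - 1)

    grid = []
    for kx in range(nx):
        for ky in range(ny):
            for kz in range(nz):
                node = (axis(0, kx, nx), axis(1, ky, ny), axis(2, kz, nz))
                nearest = min(samples, key=lambda s: math.dist(node, s[:3]))
                grid.append((node, nearest[3]))
    return grid
```

Because the grid is uniform, its connectivity is implicit: only the extents, resolution and per-node values need to be stored.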

Subsections of Module Libraries

Revisions to Module Names Effective After EVS Version 2021.10

Effective October 2021, there was a major revision to module naming. The table below lists the old and new names. Also note that the Cell Data library was eliminated with its modules moved to Processing.

In general the new module names are intended to be more descriptive of each module’s functionality. For example, krig_3d_geology was named over 25 years ago when we developed it to create 3D stratigraphic models using kriging to estimate the horizons. It no longer uses kriging as its default estimation method (it is now one of many), and it is often used to build grids that are solely conformal to surface topography. Its new name, “gridding and horizons”, is far more descriptive of its current use.

Also we have striven to be consistent in the naming of input and output modules. If they read or write EVS proprietary formats, their naming begins with read or write. If they read or write external formats (GIS, CAD, industry standards, images, etc.) their names begin with import or export.

| Old Category | Old Module Name | New Library | New Module Name |
|---|---|---|---|
| Estimation | krig_2d | Estimation | 2d estimation |
| Estimation | krig_3d | Estimation | 3d estimation |
| Estimation | krig_3d_geology | Estimation | gridding and horizons |
| Estimation | indicator_realization | Estimation | lithologic realization |
| Geology | make_geo_hierarchy | Geology | create stratigraphic hierarchy |
| Geology | 3d_geology_map | Geology | horizons to 3d |
| Geology | geology_to_structured | Geology | horizons to 3d structured |
| Geology | layer_from_surface | Geology | layer from horizon |
| Geology | geologic_surface | Geology | surface from horizons |
| Geology | geologic_surfaces | Geology | surfaces from horizons |
| Geology | indicator_geology | Geology | lithologic modeling |
| Geology | combine_geology | Geology | combine horizons |
| Geology | subset_layers | Geology | subset horizons |
| Geology | make_single_layer | Geology | collapse horizons |
| Display | contour_data | Display | band data |
| Display | adjust_opacity | Display | opacity by nodal data |
| Display | select_data | Display | select single data |
| Display | read_wavefront_obj | Display | import wavefront obj |
| Analysis | area_integrate | Analysis | compute surface area |
| Annotation | north | Annotation | direction indicator |
| Subsetting | thin_fence | Subsetting | cross section |
| Subsetting | plume_cell | Subsetting | subset by expression |
| Subsetting | select_cells | Subsetting | select cell sets |
| Proximity | area_cut | Proximity | distance to 2d area |
| Proximity | surf_cut | Proximity | distance to surface |
| Proximity | shape_cut | Proximity | distance to shape |
| Proximity | buffer | Proximity | buffer distance |
| Proximity | tunnel_cut | Proximity | distance to tunnel center |
| Proximity | mask_geology | Geology | mask horizons |
| Processing | combine_components | Processing | combine nodal data |
| Processing | interp_data | Processing | interpolate nodal data |
| Processing | thickness | Processing | compute thickness |
| Processing | data_translate | Processing | translate by data |
| Import | load_evs_field | Import | read evs field |
| Import | read_vtk | Import | import vtk |
| Import | read_cad | Import | import cad |
| Import | read_vector_gis | Import | import vector gis |
| Import | raster_to_geology | Import | import raster as horizon |
| Import | strike_and_dip | Import | read strike and dip |
| Import | load_glyph | Import | read glyph |
| Import | symbols | Import | read symbols |
| Import | read_geometry | Import | import geometry |
| Export | save_evs_field | Export | write evs field |
| Export | write_coordinates | Export | export nodes |
| Export | write_cad | Export | export cad |
| Export | write_vector_gis | Export | export vector gis |
| Export | geology_to_raster | Export | export horizon to raster |
| Export | geology_to_vistas | Export | export horizons to vistas |
| Modeling | streamlines | Modeling | 3d streamlines |
| Modeling | streamline_surface | Modeling | surface streamlines |
| Modeling | drill_path | Modeling | create drill path |
| Modeling | combine_vect | Modeling | scalars to vector |
| Modeling | magnitude | Modeling | vector magnitude |
| Geometry | polyline_spline | Geometry | polyline processing |
| Geometry | tri_tool | Geometry | triangle refinement |
| Geometry | glyph | Geometry | glyphs at nodes |
| Projection | surfmap | Projection | project onto surface |
| Projection | transform_group | Projection | transform objects |
| Image | load_eft | Image | read eft |
| Image | texture_geology | Image | texture cell sets |
| Image | georeferenced_output | Image | export georeferenced image |
| Time | time_geology | Time | time horizon |
| Tools | group_object | Tools | group objects |
| Tools | 2d_overlay_group | Tools | group objects to 2d overlay |
| Tools | scat_to_tin | Tools | create tin |
| Cell Data | cell_computation | Processing | cell computation |
| Cell Data | cell_to_node | Processing | cell data to node data |
| Cell Data | node_to_cell | Processing | node data to cell data |
| Cell Data | interp_cell_data | Processing | interpolate cell data |
| Cell Data | shrink_cells | Processing | shrink cells |
| Cell Data | cell_centers | Processing | cell centers |
  • 3d estimation

    3d estimation performs parameter estimation using kriging and other methods to map 3D analytical data onto volumetric grids defined by the limits of the data set, or by the convex hull, rectilinear, or finite-difference grid extents of a geologic system modeled by gridding and horizons. 3d estimation provides several convenient options for pre- and post-processing the input parameter values, and allows the user to consider anisotropy in the medium containing the property.

  • 2d estimation

    2d estimation performs parameter estimation using kriging and other methods to map 2D analytical data onto surface grids defined by the limits of the data set as rectilinear or convex hull extents of the input data. Its Adaptive Gridding further subdivides individual elements to place a “kriged” node at the location of each input data sample. This guarantees that the output will accurately reflect the input at all measured locations (i.e. the maximum in the output will be the maximum of the input).

  • gridding and horizons

    gridding and horizons The gridding and horizons module uses data files containing geologic horizons or surfaces (usually .geo, .gmf and other C Tech formats containing surfaces) to model the surfaces bounding geologic layers that will provide the framework for three-dimensional geologic modeling and parameter estimation. Conversion of scattered points to surfaces uses kriging (the default), spline (previously in the spline_geology module), IDW or nearest neighbor algorithms.

  • analytical realization

    analytical realization The analytical realization module is one of three similar modules (the other two are lithologic realization and stratigraphic realization), which allows you to very quickly generate statistical realizations of your 2D and 3D kriged models based upon C Tech’s proprietary Extended Gaussian Geostatistical Simulation (GGS) technology, which we refer to as Fast Geostatistical Realizations® or FGR®. Our extensions to GGS allow you to:

  • stratigraphic realization

    stratigraphic realization The stratigraphic realization module is one of three similar modules (the other two are analytical realization and lithologic realization), which allows you to very quickly generate statistical realizations of your stratigraphic horizons based upon C Tech’s proprietary Extended Gaussian Geostatistical Simulation (GGS), which we refer to as Fast Geostatistical Realizations® or FGR®. Our extensions to GGS allow you to:

  • lithologic realization

    lithologic realization The lithologic realization module is one of three similar modules (the other two are analytical realization and stratigraphic realization), which allows you to very quickly generate statistical realizations of your 2D and 3D lithologic models based upon C Tech’s proprietary Extended Gaussian Geostatistical Simulation (GGS), which we refer to as Fast Geostatistical Realizations® or FGR®. Our extensions to GGS allow you to:

  • lithologic assessment

    Lithologic assessment provides a way to determine the quality of a lithologic model on an individual material basis.

  • external_kriging

    external_kriging The external_kriging module allows users to perform estimation in GeoEAS, which supports very advanced variography and kriging techniques, using grids created in EVS (with or without layers or lithology). Grids and data are kriged externally from EVS and the results can then be read into EVS and treated as if they were kriged in EVS.

Subsections of Estimation

3d estimation

3d estimation performs parameter estimation using kriging and other methods to map 3D analytical data onto volumetric grids defined by the limits of the data set, or by the convex hull, rectilinear, or finite-difference grid extents of a geologic system modeled by gridding and horizons. 3d estimation provides several convenient options for pre- and post-processing the input parameter values, and allows the user to consider anisotropy in the medium containing the property.

3d estimation also has the ability to create uniform fields, and the ability to choose which data components you want to include in the output. There are a couple of significant requirements for uniform fields. First, there cannot be geologic input (otherwise the cells could not be rectangular blocks). Second, Adaptive Gridding must be turned off (otherwise the connectivity is not implicit).

Module Input Ports

  • Filename [String / minor] Allows the sharing of file names between similar modules.
  • Input Geologic Field [Field] Accepts a data field from gridding and horizons to krige data into geologic layers.
  • Input External Grid [Field / minor] Allows the user to import a previously created grid. All data will be kriged to this grid.
  • Input External Data [Field / minor] Allows the user to import a field containing data. This data will be kriged to the grid instead of using file data.

Module Output Ports

  • Filename [String / minor] Allows the sharing of file names between similar modules.
  • Output Field [Field] Outputs a 3D data field which can be input to any of the Subsetting and Processing modules.
  • Status Information [String / minor] Outputs a string containing module parameters. This is useful for connection to write evs field to document the settings used to create a grid.
  • Uncertainty Sphere [Renderable / minor] Outputs a sphere to the viewer. This sphere represents the location of maximum uncertainty.

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Grid Settings: control the grid type, position and resolution
  • Data Processing: controls clipping, processing (Log) and clamping of input data and kriged outputs.
  • Time Settings: controls how the module deals with time domain data
  • Krig Settings: control the estimation methods
  • Data To Export: specify which data is included in the output
  • Display Settings: applies to maximum uncertainty sphere
  • Drill Guide: parameters associated with DrillGuide computations for analytically guided site assessment

Variogram Options:

There are three variogram options:

  1. Spherical: Our default and recommended choice for most applications
  2. Exponential: Generally gives similar results to Spherical and may be superior for some datasets
  3. Gaussian: Notoriously unstable, but can “smooth” your data with an appropriate nugget.

I specifically want to discuss the pros and cons of Gaussian. Without a nugget term, Gaussian is generally unusable. When using Autofit, our expert system will apply a modest nugget (~1% of sill) to maintain stability. If you’re committed to experimenting with Gaussian, it is recommended that you experiment with the nugget term after EVS computes the Range and Sill. Below are some things to look for:

  • If you find that Gaussian kriging is overshooting the plume in various directions, your nugget is likely too small.
  • However, if the plume looks overly smooth and is too far from honoring your data, your nugget is likely too big.

The “Power Factor” is only used for exponential or gaussian variograms. The default value of 3 is the most common value used for exponential variograms in most software. For gaussian, 2 is most common, though anything from 0.1 to 3 is typically acceptable. This is effectively the “a” term described here: https://en.wikipedia.org/wiki/Variogram#Variogram_models
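The three models above (with a nugget term, and the power factor appearing in the exponent of the exponential and gaussian forms) follow standard textbook definitions, which can be sketched as follows (parameter names are illustrative, not the EVS interface):

```python
import math

def variogram(h, model, nugget, sill, rng, power=3.0):
    """Evaluate a variogram model at lag distance h, following the common
    textbook forms. `rng` is the range, and `power` plays the role of the
    "Power Factor" exponent term for the exponential/gaussian models.
    Sketch under those assumptions; not the EVS implementation."""
    if h == 0:
        return 0.0                     # variogram is zero at zero lag
    c = sill - nugget                  # partial sill
    if model == "spherical":
        if h >= rng:
            return sill
        r = h / rng
        return nugget + c * (1.5 * r - 0.5 * r ** 3)
    if model == "exponential":
        return nugget + c * (1.0 - math.exp(-power * h / rng))
    if model == "gaussian":
        return nugget + c * (1.0 - math.exp(-power * (h / rng) ** 2))
    raise ValueError("unknown model: " + model)
```

The nugget’s stabilizing role for gaussian is visible here: as h approaches zero the model jumps from 0 to roughly the nugget value, which is exactly the discontinuity a modest nugget (~1% of sill) introduces to keep the kriging system well-conditioned.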

Advanced Variography Options:

It is far beyond the scope of our Help documentation to include an advanced geostatistics course. The terminology and variogram plotting style that we use are industry standard. We do not provide detailed technical support nor complete documentation on these features in our Help, since doing so would effectively require a geostatistics textbook.

However, there is an Advanced Training Video on how to take advantage of the complex, directional anisotropic variography capabilities in 3d estimation (which applies equally well to lithologic modeling). This class is focused on the mechanics of how to employ and refine the variogram anisotropy with respect to your data and the physics of your project such as contaminated sediments in a river bottom. The variogram is displayed as an ellipsoid which can be distorted to represent the Primary and Secondary anisotropies and rotated to represent the Heading, Dip and Roll. Overall scale and translation are also provided as additional visual aids to compare the variogram to the data, though these do not affect the actual variogram.

We are not hiding this capability from you: the Anisotropic Variography Study folder of Earth Volumetric Studio Projects contains a number of sample applications which demonstrate exactly what is described above. However, applying this to your own projects can be quite daunting and really does require a number of prerequisites:

  • A thorough explanation of these complex applications
  • An understanding of all of the variogram parameters and their impact on the estimation process on both theoretical datasets as well as real-world datasets.

This 3 hour course addresses these issues in detail.

2d estimation

2d estimation performs parameter estimation using kriging and other methods to map 2D analytical data onto surface grids defined by the limits of the data set as rectilinear or convex hull extents of the input data.

Its Adaptive Gridding further subdivides individual elements to place a “kriged” node at the location of each input data sample. This guarantees that the output will accurately reflect the input at all measured locations (i.e. the maximum in the output will be the maximum of the input).

The DrillGuide functionality produces a new input data file with a synthetic boring at the location of maximum uncertainty calculated from the previous kriging estimates, which can then be rerun to find the next area of highest uncertainty. The “DrillGuide©” file created when 2d estimation is run with any type of analyte (e.g. chemistry) file is named with an extension ending in an incrementing number: an input ending in .apdv1 produces an output ending in .apdv2, then .apdv3, .apdv4, and so on. There are no limits to the number of cycles that may be run.
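The incrementing-extension convention can be illustrated with a small helper (a hypothetical function, not part of EVS):

```python
import re

def next_drillguide_name(filename):
    """Compute the next DrillGuide output name following the convention
    described above: .apdv -> .apdv1 -> .apdv2 -> ... on each cycle.
    Hypothetical helper for illustration only -- not part of EVS."""
    m = re.fullmatch(r"(.*\.apdv)(\d*)", filename)
    if not m:
        raise ValueError("expected a .apdv-style filename")
    base, num = m.groups()
    return base + str(int(num) + 1 if num else 1)
```

Each DrillGuide cycle reads the latest file and writes the next one in the series, so the numbering records how many cycles have been run.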

The use of 2d estimation to perform analytically guided site assessment is covered in detail in Workbook 2: DrillGuide© Analytically Guided Site Assessment.

This process can be continued as many times as desired to define the number and placement of additional borings that are needed to reduce the maximum uncertainty in the modeled domain to a user-specified level. The features of 2d estimation make it particularly useful for optimizing the benefits obtained from environmental sampling or ore drilling programs.

2d estimation also provides some special data processing options that are unique to it, which allow it to extract 2-dimensional data sets from input data files that contain three-dimensional data. This functionality allows it to use the same .apdv files as all of the other EVS input and kriging modules, and allows detailed analyses of property characteristics along 2-dimensional planes through the data set.

2d estimation also provides the user with options to magnify or distort the resulting grid by the kriged value of the property at each grid node. Finally, 2d estimation allows the user to automatically clamp the data distribution to a specified level along a boundary that can be offset from the convex hull of the data domain by a user-defined amount.

Module Input Ports

  • Input External Grid [Field / minor] Allows the user to import a previously created grid. All data will be kriged to this grid.
  • Input External Data [Field / minor] Allows the user to import a field containing data. This data will be kriged to the grid instead of using file data.
  • Filename [String / minor] Allows the sharing of file names between similar modules.

Module Output Ports

  • Output Field [Field] Outputs a 3D data field which can be input to any of the Subsetting and Processing modules which have the same color port
  • Filename [String / minor] Allows the sharing of file names between similar modules.
  • Status Information [String / minor] Outputs a string containing module parameters. This is useful for connection to write evs field to document the settings used to create a grid.
  • Surface [Renderable] Outputs the kriged surface to the viewer

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Grid Settings: control the grid type, position and resolution
  • Data Processing: controls clipping, processing (Log) and clamping of input data and kriged outputs.
  • Time Settings: controls how the module deals with time domain data
  • Krig Settings: control the estimation methods
  • Data To Export: specify which data is included in the output
  • Display Settings: applies to maximum uncertainty sphere
  • Drill Guide: parameters associated with DrillGuide computations for analytically guided site assessment

Variogram Options:

There are three variogram options:

  1. Spherical: Our default and recommended choice for most applications
  2. Exponential: Generally gives similar results to Spherical and may be superior for some datasets
  3. Gaussian: Notoriously unstable, but can “smooth” your data with an appropriate nugget.

I specifically want to discuss the pros and cons of Gaussian. Without a nugget term, Gaussian is generally unusable. When using Autofit, our expert system will apply a modest nugget (~1% of sill) to maintain stability. If you’re committed to experimenting with Gaussian, it is recommended that you experiment with the nugget term after EVS computes the Range and Sill. Below are some things to look for:

  • If you find that Gaussian kriging is overshooting the plume in various directions, your nugget is likely too small.
  • However, if the plume looks overly smooth and is too far from honoring your data, your nugget is likely too big.
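To see why the nugget matters for Gaussian, the three model shapes can be sketched using their standard textbook forms. This is a generic geostatistics illustration, not EVS’s internal implementation; the ~1%-of-sill nugget mirrors what Autofit applies.

```python
import math

def spherical(h, nugget, sill, rng):
    """Spherical variogram: the default and recommended model."""
    if h == 0.0:
        return 0.0
    if h >= rng:
        return sill
    return nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)

def exponential(h, nugget, sill, rng):
    """Exponential model: reaches ~95% of the sill at h = rng."""
    if h == 0.0:
        return 0.0
    return nugget + (sill - nugget) * (1.0 - math.exp(-3.0 * h / rng))

def gaussian(h, nugget, sill, rng):
    """Gaussian model: nearly flat at the origin, which is the source of
    its numerical instability. A small nugget (~1% of sill, as Autofit
    applies) lifts the curve off zero and stabilizes the kriging system."""
    if h == 0.0:
        return 0.0
    return nugget + (sill - nugget) * (1.0 - math.exp(-3.0 * (h / rng) ** 2))
```

Increasing the nugget in `gaussian` damps overshoot; decreasing it makes the model honor the data more tightly, matching the two symptoms listed above.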

gridding and horizons

The gridding and horizons module uses data files containing geologic horizons or surfaces (usually .geo, .gmf and other C Tech formats containing surfaces) to model the surfaces bounding geologic layers, which provide the framework for three-dimensional geologic modeling and parameter estimation. Conversion of scattered points to surfaces uses kriging (the default), spline (previously in the spline_geology module), IDW, or nearest neighbor algorithms.

gridding and horizons creates a 2D grid containing one or more elevations at each node. Each elevation represents a geologic surface at that point in space. The output of gridding and horizons is a data field that can be sent to several modules (e.g. 3d estimation, horizons to 3d, horizons_to_3d_structured, surfaces from horizons, etc.)
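Conceptually, each node of the output field can be pictured as an (x, y) location carrying a stack of horizon elevations. This is a hypothetical illustration of the concept, not the actual EVS field layout:

```python
# One node of a hypothetical horizons field: an (x, y) location plus a
# top-down stack of elevations, one per geologic surface.
node = {
    "x": 1250.0,
    "y": 880.0,
    "horizons": [102.5, 96.0, 71.3, 55.8],  # e.g. Ground, Fill, Clay, Bedrock
}

# Layer thicknesses fall out of adjacent surface pairs:
thicknesses = [top - bottom
               for top, bottom in zip(node["horizons"], node["horizons"][1:])]
```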

Those modules which create volumetric models convert the quadrilateral elements into layers of hexahedral (8-node brick) elements. The output of gridding and horizons can also be sent to the surface from horizons and surfaces from horizons modules, which allow visualization of the individual layers of quadrilateral elements that comprise the surfaces.

gridding and horizons has the capability to produce layer surfaces within the convex hull of the data domain, within a rectilinear domain with equally spaced nodes, or within a rectilinear domain with specified cell sizes such as a finite-difference model grid. The finite-difference gridding capability allows the user to visually design a grid with variable spacing, and then krige the geologic layer elevations directly to the finite-difference grid nodes. gridding and horizons also provides geologic surface definitions to the post_samples module to allow exploding of boreholes and samples by geologic layer.

Note: gridding and horizons has the ability to read .apdv, .aidv and .pgf files to create a single geologic layer model. This is not intended as a preferred alternative to creating/representing your valid site geology. However, most sites have some ground surface topography variation. If 3d estimation is used without geology input, the resulting output will have flat top and bottom surfaces. The flat top surface may be below or above the actual ground surface at various locations. This can result in plume volumes that are inaccurate.

When a .apdv or .pgf file is read by gridding and horizons, it is interpreted as geology as follows:

  1. If Top of boring elevations are provided in the file, these values are used to create the ground surface.

  2. If Top of boring elevations are not provided in the file, the elevations of the highest sample in each boring are used to create the ground surface.

  3. The bottom surface is created as a flat surface slightly below the lowest sample in the file. The elevation of the surface is computed by taking the lowest sample and subtracting 5% of the total z-extent of the samples.

When reading these files, you will get a single layer whose top is either the Top column (if it exists) or otherwise the top sample in each boring, and whose flat bottom is 5% below the lowest sample in the file. This allows you to create a convex hull around data without having geology info. It also provides a topographic top surface if your analyte (e.g. chemistry) or PGF file has Tops (ground surface elevations). It is also convenient for indicator kriging, since a single, well-defined pgf can give you an entire indicator model. Be aware that if Top is specified, but all values are exactly 0.0, the top sample elevation for each boring will be used.
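The single-layer interpretation above can be sketched as follows. This is a simplified illustration of the three rules; the boring and elevation structures are hypothetical:

```python
def ground_surface(borings, tops=None):
    """Rules 1 and 2: use Top of boring elevations when provided (and not
    all exactly 0.0); otherwise use the highest sample in each boring."""
    if tops is not None and any(t != 0.0 for t in tops.values()):
        return dict(tops)
    return {boring: max(elevs) for boring, elevs in borings.items()}

def flat_bottom_elevation(borings):
    """Rule 3: a flat bottom 5% of the total z-extent below the lowest sample."""
    all_elevs = [z for elevs in borings.values() for z in elevs]
    z_min, z_max = min(all_elevs), max(all_elevs)
    return z_min - 0.05 * (z_max - z_min)
```

For example, samples spanning elevations 60 to 100 would produce a flat bottom at 58 (5% of the 40-unit extent below the lowest sample).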

Module Input Ports

  • Filename [String / minor] Receives the filename from other modules.

Module Output Ports

  • Geologic legend Information [Geology legend] Supplies the geologic material information for the legend module.
  • Output Geologic Field [Field] Can be connected to the 3d estimation, 3D_Geology Map, and surface from horizons(s) modules.
  • Filename [String / minor] Outputs a string containing the file name and path. This can be connected to other modules to share files.
  • Status Information [String / minor] Outputs a string containing module parameters. This is useful for connection to write evs field to document the settings used to create a grid.
  • Geology Export Output [Vistas Data / minor] Provides input to the export horizons to vistas and other modules which create raster output.
  • Grid [Renderable / minor] Outputs the geometry of the 2D grid.

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Grid Settings: control the grid type, position and resolution
  • Krig Settings: control the estimation methods
  • Computational Settings: define computational surfaces included in the output. This allows a single surface file to define a layer specified by elevation or depth.

analytical realization

The analytical realization module is one of three similar modules (the other two are lithologic realization and stratigraphic realization), which allows you to very quickly generate statistical realizations of your 2D and 3D kriged models based upon C Tech’s Proprietary Extended Gaussian Geostatistical Simulation (GGS) technology, which we refer to as Fast Geostatistical Realizations^®^ or FGR^®^. Our extensions to GGS allow you to:

  • Create realizations very rapidly
  • Exercise greater control over the frequency and magnitude of noise typical in GGS.
  • Control deviation magnitudes from the nominal kriged prediction based on a Min Max Confidence Equivalent.
    • Deviations are the absolute value of the changes to the analytical prediction (in user units)
  • Apply Simple or Advanced Anisotropy control over 2D or 3D wavelengths

C Tech’s FGR^®^ creates more plausible cases (realizations) which allow the Nominal concentrations to deviate from the peak of the bell curve (equal probability of being an under-prediction as an over-prediction) by the same user defined Confidence. However, FGR allows the deviations to be both positive (max) and negative (min), and to fluctuate in a more realistic randomized manner.

Module Input Ports

  • Realization [Special Field] Accepts outputs from 3d estimation and krig_2d to allow for EGGS models to be created

Module Output Ports

  • Output Field [Field] Outputs the subsetting level
  • Deviations Field [Field] Outputs the deviations from the nominal kriged model

Important Parameters

There are several parameters which affect the realizations. A brief description of each follows:

  • Randomness Generator Type
    • There are four types, each of which create a different 2D/3D random distribution
  • Anisotropy Mode
    • Two options here are Simple or Advanced. These are equivalent to the variogram settings in 3d estimation or krig_2d
  • Seed
    • The “Seed” is used in the random number generator, and makes it reproducible.
    • Unique seeds create unique realizations
  • Wavelength
    • The 2D or 3D random distribution is governed by a mean wavelength that determines the apparent frequency of deviations from the nominal kriged result.
    • Wavelength is in your project coordinates (e.g. meters or feet)
    • Longer wavelengths create smoother realizations
    • Shorter wavelengths create more “noisy” variations in the realizations
    • Very short wavelengths will give results more similar to GGS (aka Sequential Gaussian Simulations)
  • Min Max Confidence Equivalent
    • This parameter determines the magnitude of the deviations.
    • Values close to 50% result in outputs that deviate very little from the nominal kriged result.
      • (we do not allow values below 51% for algorithm stability reasons)
    • Values at or approaching 99.99% will result in the greatest (4 sigma) variations (more similar to GGS)
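The relationship between the confidence percentage and the deviation magnitude can be illustrated with a standard two-sided normal interpretation. This mapping is our own sketch of the concept, not C Tech’s published algorithm:

```python
from statistics import NormalDist

def confidence_to_sigma(confidence_pct):
    """Map a two-sided confidence level to a number of standard deviations.
    ~95% maps to ~1.96 sigma; values approaching 99.99% map to ~3.9 sigma,
    consistent with the "greatest (4 sigma) variations" described above."""
    if not 51.0 <= confidence_pct <= 99.99:
        raise ValueError("parameter is restricted to the 51%..99.99% range")
    p = confidence_pct / 100.0
    return NormalDist().inv_cdf(0.5 + p / 2.0)
```

Values near 51% map to well under one sigma, which is why those realizations barely deviate from the nominal kriged result.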

stratigraphic realization

The stratigraphic realization module is one of three similar modules (the other two are analytical realization and lithologic realization), which allows you to very quickly generate statistical realizations of your stratigraphic horizons based upon C Tech’s Proprietary Extended Gaussian Geostatistical Simulation (GGS), which we refer to as Fast Geostatistical Realizations^®^ or FGR^®^. Our extensions to GGS allow you to:

  • Create realizations rapidly
  • Exercise greater control over the frequency and magnitude of noise typical in GGS.
  • Control deviation magnitudes from the nominal kriged prediction based on a Min Max Confidence Equivalent.
    • Deviations are the absolute value of the changes to surface elevations for each stratigraphic horizon.
  • Apply Simple or Advanced Anisotropy control over 2D wavelengths
  • For stratigraphic realizations only: we support Natural Neighbor as well as kriging for the input model.

Module Input Ports

  • Realization [Special Field] Accepts outputs from gridding and horizons to allow for FGR^®^ models to be created

Module Output Ports

  • Output Field [Field] Outputs the subsetting level
  • Deviations Field [Field] Outputs the deviations from the nominal kriged model

Important Parameters

There are several parameters which affect the realizations. A brief description of each follows:

  • Randomness Generator Type
    • There are four types, each of which create a different 2D/3D random distribution
  • Anisotropy Mode
    • Two options here are Simple or Advanced. These are equivalent to the variogram settings in gridding and horizons
  • Seed
    • The “Seed” is used in the random number generator, and makes it reproducible.
    • Unique seeds create unique realizations
  • Wavelength
    • The 2D or 3D random distribution is governed by a mean wavelength that determines the apparent frequency of deviations from the nominal kriged (or Natural Neighbor) result.
    • Wavelength is in your project coordinates (e.g. meters or feet)
    • Longer wavelengths create smoother realizations
    • Shorter wavelengths create more “noisy” variations in the realizations
    • Very short wavelengths will give results more similar to GGS (aka Sequential Gaussian Simulations)
  • Min Max Confidence Equivalent
    • This parameter determines the magnitude of the deviations.
    • Values close to 50% result in outputs that deviate very little from the nominal kriged (or Natural Neighbor) result.
      • (we do not allow values below 51% for algorithm stability reasons)
    • Values at or approaching 99.99% will result in the greatest (4 sigma) variations (more similar to GGS)

lithologic realization

The lithologic realization module is one of three similar modules (the other two are analytical realization and stratigraphic realization), which allows you to very quickly generate statistical realizations of your 2D and 3D lithologic models based upon C Tech’s Proprietary Extended Gaussian Geostatistical Simulation (GGS), which we refer to as Fast Geostatistical Realizations^®^ or FGR^®^. Our extensions to GGS allow you to:

  • Create realizations rapidly:
    • Though lithologic realizations are the slowest of the three because:
      • The material probabilities must be additionally processed to assign materials
      • When the Smooth option is on, this process often takes nearly as long as the original kriging
  • Exercise greater control over the frequency and magnitude of visual noise typical of GGS.
  • Control deviation magnitudes from the nominal kriged probability prediction based on a Min Max Confidence Equivalent.
    • Deviations are the absolute value of the changes to each material’s probability
  • Apply Simple or Advanced Anisotropy control over 2D or 3D wavelengths

Module Input Ports

  • Realization [Special Field] Accepts outputs from 3d estimation and krig_2d to allow for EGGS models to be created

Module Output Ports

  • Output Field [Field] Outputs the subsetting level
  • Deviations Field [Field] Outputs the deviations from the nominal kriged probabilities

Important Parameters

There are several parameters which affect the realizations. A brief description of each follows:

  • Randomness Generator Type
    • There are four types, each of which create a different 2D/3D random distribution
  • Anisotropy Mode
    • Two options here are Simple or Advanced. These are equivalent to the variogram settings in lithologic modeling
  • Seed
    • The “Seed” is used in the random number generator, and makes it reproducible.
    • Unique seeds create unique realizations
  • Wavelength
    • The 2D or 3D random distribution is governed by a mean wavelength that determines the apparent frequency of deviations from the nominal kriged probabilities results.
    • Wavelength is in your project coordinates (e.g. meters or feet)
    • Longer wavelengths create smoother realizations
    • Shorter wavelengths create more “noisy” variations in the realizations
    • Very short wavelengths will give results more similar to GGS (aka Sequential Gaussian Simulations)
  • Min Max Confidence Equivalent
    • This parameter determines the magnitude of the deviations.
    • Values close to 50% result in outputs that deviate very little from the nominal kriged probabilities results.
      • (we do not allow values below 51% for algorithm stability reasons)
    • Values at or approaching 99.99% will result in the greatest (4 sigma) variations (more similar to GGS)

lithologic assessment

Lithologic assessment provides a way to determine the quality of a lithologic model on an individual material basis. The concept and procedure are as follows:

  • Select the material to be assessed (Basalt shown below)

  • Choose a Min Max Confidence Equivalent value (95% shown below)

    • A 50% confidence will result in the Min or Max being equal to the nominal model
    • High confidence values (90+%) will show greater difference from nominal
  • Select the Direction (Max or Min)

  • Choose the data to be exported

Module Input Ports

  • Realization [Special Field] Accepts output from lithologic modeling

Module Output Ports

  • Output Field [Field] Outputs the subsetting level
  • Deviations Field [Field] Outputs the deviations from the nominal kriged probabilities

external_kriging

The external_kriging module allows users to perform estimation in GeoEAS, which supports very advanced variography and kriging techniques, using grids created in EVS (with or without layers or lithology). Grids and data are kriged externally from EVS and the results can then be read into EVS and treated as if they were kriged in EVS.

This is an advanced module which should be used only by persons with experience with GeoEAS and geostatistics. C Tech does not provide technical support for the use of GeoEAS.

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration) from other modules
  • Input Data [Field] Allows the user to import a field containing data. This data will be kriged to the grid instead of the file data.
  • Input Grid [Field] Allows the user to import a previously created grid. All data will be kriged to this grid.

Module Output Ports

  • Output [Field] Outputs a 3D data field which can be input to any of the Subsetting and Processing modules.

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Properties: defines Z Scale and grid translation(s)
  • Export Data: controls the file names and data processing for creation of GeoEAS inputs.
  • Export Grid: Exports the grid and data to GeoEAS formats. A grid and data must be connected to the input ports
  • Import Data: Imports the kriged results from GeoEAS format files
  • create stratigraphic hierarchy

    The create stratigraphic hierarchy module reads a special input file format called a pgf file, and then allows the user to build geologic surfaces based on the input file’s geologic surface intersections. This process is carried out visually (in the EVS viewer) with the use of the create stratigraphic hierarchy user interface. The surface hierarchy can either be generated automatically for simple geology models or for every layer for complex models. When the user is finished creating surfaces the gmf file can be finalized and converted into a *.GEO file.

  • horizons to 3d

    The horizons to 3d module creates 3-dimensional solid layers from the 2-dimensional surfaces produced by gridding and horizons, to allow visualizations of the geologic layering of a system. It accomplishes this by creating a user specified distribution of nodes in the Z dimension between the top and bottom surfaces of each geologic layer.

  • horizons to 3d structured

    The horizons_to_3d_structured module creates 3-dimensional solid layers from the 2-dimensional surfaces produced by gridding and horizons, to allow visualizations of the geologic layering of a system. It accomplishes this by creating a user specified distribution of nodes in the Z dimension between the top and bottom surfaces of each geologic layer. This module is similar to horizons to 3d, but does not duplicate nodes at the layer boundaries and therefore the model it creates cannot be exploded into individual layers. However, this module has the advantage that its output is substantially more memory efficient and can be used with modules like crop_and_downsize or ortho_slice.

  • layer from horizon

    The layer from horizon module will create a single geo layer based upon an existing surface and a constant elevation value. The Surface Defines option will allow the user to set whether the selected surface defines the top or the bottom of the layer. For example, if Top Of Layer is chosen, the selected surface will define the top, while the Constant Elevation for Layer will define the bottom of the layer. The ‘Material Name / Number’ will define the geologic layer name and number for the newly created layer.

  • surface from horizons

    This module allows visualization of the topology of any single surface. surface from horizons can explode the geologic surface analogous to how explode_and_scale explodes layers created by horizons to 3d or 3d estimation. The ability to explode the surface is integral to this module. surface from horizons also allows the user to either color the surface according to the surface Elevation or any other data component exported by gridding and horizons.

  • surfaces from horizons

    The surfaces from horizons module provides complete control of displaying, scaling and exploding one or more geologic surfaces from the set of surfaces output by gridding and horizons. This module allows visualization of the topology of any or all surfaces and/or the interaction of a set of individual surfaces. surfaces from horizons can explode geologic surfaces analogous to how explode_and_scale explodes layers created by horizons to 3d or 3d estimation. The ability to explode the surfaces is integral to this module.

  • lithologic modeling

    lithologic modeling is an alternative geologic modeling concept that uses geostatistics to assign lithologic materials, as defined in a pregeology (.pgf) file, to cells in a 3D volumetric grid. There are two Estimation Types: Nearest Neighbor is a quick method that merely finds the nearest lithology sample interval among all of your data and assigns that material. It is very fast, but generally should not be used for your final work. Kriging provides the rigorous probabilistic approach to geologic indicator kriging. The probability for each material is computed for each cell center of your grid. The material with the highest probability is assigned to the cell. All of the individual material probabilities are provided as additional cell data components. This will allow you to identify regions where the material assignment is somewhat ambiguous. Needless to say, this approach is much slower (especially with many materials), but often yields superior results and interesting insights. There are also two Lithology Methods when Kriging is selected.

  • mask horizons

    mask horizons receives geologic input into its left input port and an optional input masking surface into its right port. The Input Field [Field] port accepts a data field; the Input Area [Field] port accepts a field defining a surface of the area for masking; the Output Field [Field] port outputs the processed field. NOTE: The mask is normally applied to the first surface only. If this surface is removed, the mask is lost. However, the “Allow Subsetting” toggle will apply the mask to all horizons, though it will slow down processing and use more memory.

  • edit horizons

    edit horizons is an interactive module which allows you to probe points to be selectively added to the creation of each and every stratigraphic horizon.

  • horizon ranking

    The horizon_ranking module is used to give the user control over individual surface priorities and rankings. This allows the user to fine-tune their hierarchy in ways much more complex than a simple top-down or bottom-up approach. horizon_ranking has one input port, which receives geologic input from modules like gridding and horizons.

  • material mapping

    This module can re-assign data corresponding to Geologic Layer, Material ID, Indicator, or Adaptive Indicator for the purpose of grouping. This provides great flexibility for exploding models or coloring. Groups are processed from Top to Bottom. You can have overlapping groups or groups whose range falls inside a previous group; in that event, the lower groups override the values mapped in a higher group.

  • combine horizons

    The combine horizons module is used to merge up to six geologic horizons (surfaces) to create a field representing multiple geologic layers. The mesh (x-y coordinates) from the first input field will be the mesh in the output. The input fields should have the same scale, origin, and number of nodes in order for the output data to have any meaning.

  • subset horizons

    The subset horizons module allows you to subset the output of gridding and horizons so that downstream modules (3d estimation, horizons to 3d, Geologic Surface) act on only a portion of the layers kriged. subset horizons is used to select a subset of the layers (and corresponding surfaces) exported from gridding and horizons. This is useful if you want (need) to krige parameter data in each geologic layer separately.

  • collapse horizons

    The collapse horizons module allows you to subset the output of gridding and horizons so that downstream modules (3d estimation, horizons to 3d, Geologic Surface) act on only a single merged layer. collapse horizons is used to merge all layers (and corresponding surfaces) exported from gridding and horizons into a single layer (topmost and bottommost surfaces).

  • displace block

    displace_block receives any 3D field into its input port and outputs the same field translated in z according to a selected nodal data component of an input surface, allowing for non-uniform fault block translation. This module allows for the creation of tear faults and other complex geologic structures. Used in conjunction with distance to surface, it makes it possible to easily model extremely complex deformations.

Subsections of Geology

create stratigraphic hierarchy

The create stratigraphic hierarchy module reads a special input file format called a pgf file, and then allows the user to build geologic surfaces based on the input file’s geologic surface intersections. This process is carried out visually (in the EVS viewer) with the use of the create stratigraphic hierarchy user interface. The surface hierarchy can either be generated automatically for simple geology models or for every layer for complex models. When the user is finished creating surfaces the gmf file can be finalized and converted into a *.GEO file.

Boring States:

  • Preserve Bottom tells the module that when the TIN has reached the bottom of a boring, it should not drop that boring from the geology but continue to add the same point to the remaining surfaces.

    • The Preserved state is automatically applied when the Preserve Bottom toggle is on, and you reach the bottom of the boring.
  • To Be Dropped is just for your information (this is not a state that you can set). When the TIN continues below a boring, that boring gets dropped from the remainder of the surfaces.

  • Boring Dropped is a way of removing a boring from the geology for the current surface and below. This happens automatically when the TIN reaches the bottom of a boring, but it can be done at any point by changing this state.

horizons to 3d

The horizons to 3d module creates 3-dimensional solid layers from the 2-dimensional surfaces produced by gridding and horizons, to allow visualizations of the geologic layering of a system. It accomplishes this by creating a user specified distribution of nodes in the Z dimension between the top and bottom surfaces of each geologic layer.

The number of nodes specified for the Z Resolution may be distributed over the geologic layers approximately in proportion to the fractional thickness of each layer relative to the total thickness of the geologic domain. In this case, at least three layers of nodes (2 layers of elements) will be placed in each geologic layer.
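The proportional distribution can be sketched as follows. This is a simplified illustration of the stated rule, not EVS’s exact rounding logic:

```python
def distribute_z_nodes(layer_thicknesses, z_resolution, min_node_layers=3):
    """Split the requested Z Resolution across geologic layers approximately
    in proportion to thickness, with at least min_node_layers per layer."""
    total = sum(layer_thicknesses)
    return [max(min_node_layers, round(z_resolution * t / total))
            for t in layer_thicknesses]
```

For example, a Z Resolution of 20 over layers of thickness 10, 30 and 60 would yield roughly [3, 6, 12] layers of nodes; the thin layer is promoted to the three-node minimum.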

Please note that if any portions of the input geology are NULL, those cells will be omitted from the grid that is created. This can save memory and provide a means to cut (in a Lego fashion) along boundaries.

Module Input Ports

  • Input Geologic Field [Field] Accepts a data field from gridding and horizons to krige data into geologic layers.

Module Output Ports

  • Output Field [Field] Outputs a 3D data field which can be input to any of the Subsetting and Processing modules.

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Properties: controls Z Scale and Explode distance
  • Layer Settings: resolution and layer settings
  • Data To Export: controls which data is output.

horizons_to_3d_structured

The horizons_to_3d_structured module creates 3-dimensional solid layers from the 2-dimensional surfaces produced by gridding and horizons, to allow visualizations of the geologic layering of a system. It accomplishes this by creating a user specified distribution of nodes in the Z dimension between the top and bottom surfaces of each geologic layer.

This module is similar to horizons to 3d, but does not duplicate nodes at the layer boundaries and therefore the model it creates cannot be exploded into individual layers. However, this module has the advantage that its output is substantially more memory efficient and can be used with modules like crop_and_downsize or ortho_slice.

The number of nodes specified for the Z Resolution may be distributed over the geologic layers approximately in proportion to the fractional thickness of each layer relative to the total thickness of the geologic domain.

Module Input Ports

  • Input Geologic Field [Field] Accepts a data field from gridding and horizons to krige data into geologic layers.

Module Output Ports

  • Output Field [Field] Outputs a 3D data field which can be input to any of the Subsetting and Processing modules.

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Properties: controls Z Scale and Explode distance
  • Layer Settings: resolution and layer settings
  • Data To Export: controls which data is output.

layer from horizon

The layer from horizon module will create a single geo layer based upon an existing surface and a constant elevation value.

The Surface Defines option allows the user to set whether the selected surface defines the top or the bottom of the layer. For example, if Top Of Layer is chosen, the selected surface will define the top, while the Constant Elevation for Layer will define the bottom of the layer. The ‘Material Name / Number’ will define the geologic layer name and number for the newly created layer.
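The Surface Defines logic amounts to choosing which bound the surface supplies. A minimal sketch (the option strings here are illustrative):

```python
def layer_bounds(surface_elev, constant_elev, surface_defines="Top Of Layer"):
    """Return (top, bottom) elevations for the new layer: the selected
    surface supplies one bound and the constant elevation the other."""
    if surface_defines == "Top Of Layer":
        return surface_elev, constant_elev
    return constant_elev, surface_elev
```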

surface from horizons

This module allows visualization of the topology of any single surface.

surface from horizons can explode the geologic surface analogous to how explode_and_scale explodes layers created by horizons to 3d or 3d estimation. The ability to explode the surface is integral to this module.

surface from horizons also allows the user to either color the surface according to the surface Elevation or any other data component exported by gridding and horizons.

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration) from other modules
  • Explode [Number] Accepts the Explode distance from other modules
  • Input Geologic Field [Field] Accepts a data field from gridding and horizons to krige data into geologic layers.

Module Output Ports

  • Z Scale [Number] Outputs Z Scale (vertical exaggeration) to other modules
  • Explode [Number] Outputs the Explode distance to other modules
  • Surface Name [String / minor] Outputs a string containing the selected surface’s name
  • Output Field [Field] Outputs a 3D data field which can be input to any of the Subsetting and Processing modules.
  • Surface [Renderable]: Outputs to the viewer.

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Properties: controls Z Scale and Explode distance
  • Surface Settings: controls translation, hierarchy and surface selection
  • Data Settings: controls clipping, processing (Log) and clamping of input data and kriged outputs.

surfaces from horizons

The surfaces from horizons module provides complete control of displaying, scaling and exploding one or more geologic surfaces from the set of surfaces output by gridding and horizons. This module allows visualization of the topology of any or all surfaces and/or the interaction of a set of individual surfaces.

surfaces from horizons can explode geologic surfaces analogous to how explode_and_scale explodes layers created by horizons to 3d or 3d estimation. The ability to explode the surfaces is integral to this module.

surfaces from horizons also allows the user to either color the surface according to the surface Elevation or any other data component exported by gridding and horizons.

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration) from other modules
  • Explode [Number] Accepts the Explode distance from other modules
  • Input Geologic Field [Field] Accepts a data field from gridding and horizons to krige data into geologic layers.

Module Output Ports

  • Z Scale [Number] Outputs Z Scale (vertical exaggeration) to other modules
  • Explode [Number] Outputs the Explode distance to other modules
  • Output Field [Field] Outputs a 3D data field which can be input to any of the Subsetting and Processing modules.
  • Surface [Renderable]: Outputs to the viewer.

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Properties: controls Z Scale and Explode distance
  • Surface Settings: controls translation, hierarchy and surface selection
  • Data Settings: controls clipping, processing (Log) and clamping of input data and kriged outputs.

lithologic modeling

lithologic modeling is an alternative geologic modeling concept that uses geostatistics to assign lithologic materials, as defined in a pregeology (.pgf) file, to cells in a 3D volumetric grid.

There are two Estimation Types:

  • Nearest Neighbor is a quick method that merely finds the nearest lithology sample interval among all of your data and assigns that material. It is very fast, but generally should not be used for your final work.
  • Kriging provides the rigorous probabilistic approach to geologic indicator kriging. The probability for each material is computed for each cell center of your grid. The material with the highest probability is assigned to the cell. All of the individual material probabilities are provided as additional cell data components. This will allow you to identify regions where the material assignment is somewhat ambiguous. Needless to say, this approach is much slower (especially with many materials), but often yields superior results and interesting insights.

There are also two Lithology Methods when Kriging is selected.

  • The default method is Block. This method is the quickest since probabilities are assigned directly to cells, and lithology is therefore determined based on the highest probability among all materials. However, the resulting model is “lego-like” and therefore requires high grid resolutions in x, y & z in order to give good-looking results.
  • The other method is Smooth. With Smooth, probabilities are assigned to nodes. In much the same way as analytical data, nodal data for probabilities provides an inherently higher effective grid resolution because after kriging probabilities to the nodes, there is an additional step where we “Smooth” the grid by interpolating between the nodes, cutting the blocky grid and forming a new smooth grid. MUCH lower grid resolutions can be used, often achieving superior results.
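The highest-probability assignment described above can be sketched as follows (a minimal illustration; the function name and data layout are hypothetical, not the EVS implementation):

```python
def assign_lithology(probabilities):
    """probabilities: list of dicts, one per cell, mapping material id -> probability."""
    assignments = []
    for cell_probs in probabilities:
        # The material with the highest probability wins the cell.
        material = max(cell_probs, key=cell_probs.get)
        # A small margin between the top two probabilities flags an ambiguous cell,
        # i.e. a region where the material assignment is somewhat uncertain.
        ranked = sorted(cell_probs.values(), reverse=True)
        ambiguous = len(ranked) > 1 and (ranked[0] - ranked[1]) < 0.1
        assignments.append((material, ambiguous))
    return assignments

cells = [
    {0: 0.7, 1: 0.2, 2: 0.1},    # clearly material 0
    {0: 0.45, 1: 0.40, 2: 0.15}, # materials 0 and 1 are nearly tied
]
print(assign_lithology(cells))  # [(0, False), (0, True)]
```

Keeping the per-material probabilities as extra cell data, as the Kriging option does, is what makes this kind of ambiguity check possible.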

Module Input Ports

  • Input Geologic Field [Field] Accepts a data field from gridding and horizons to krige data into geologic layers.
  • Filename [String / minor] Allows the sharing of file names between similar modules.
  • Refine Distance [Number] Accepts the distance used to discretize the lithologic intervals into points used in kriging.

Module Output Ports

  • Geologic legend Information [Geology legend] Supplies the geologic material information for the legend module.
  • Output Field [Field] Contains the volumetric cell based indicator geology lithology (cell data representing geologic materials).
  • Filename [String / minor] Outputs a string containing the file name and path. This can be connected to other modules to share files.
  • Refine Distance [Number] Outputs the distance used to discretize the lithologic intervals into points used in kriging or displayed in post_samples as spheres.

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Grid Settings: control the grid type, position and resolution
  • Krig Settings: control the estimation methods
    • NOTE: The Quick Method assigns the lithologic material cell data based on the nearest lithologic material (in anisotropic space) to your PGF borings. This is done based on the cell center (coordinates) and an enhanced refinement scheme for the PGF borings. In general, the Quick Method should not be used for final results.

Advanced Variography Options:

It is far beyond the scope of our Help to attempt an advanced geostatistics course. The terminology and variogram plotting style that we use are industry standard. We do not provide detailed technical support or complete documentation on these features in our help, since doing so would effectively require a geostatistics textbook.

However, we have offered an online course on how to take advantage of the complex, directional anisotropic variography capabilities in 3d estimation (which applies equally well to lithologic modeling and adaptive_indicator_krig), and that course is available as a recorded video class. This class is focused on the mechanics of how to employ and refine the variogram anisotropy with respect to your data and the physics of your project such as contaminated sediments in a river bottom. The variogram is displayed as an ellipsoid which can be distorted to represent the Primary and Secondary anisotropies and rotated to represent the Heading, Dip and Roll. Overall scale and translation are also provided as additional visual aids to compare the variogram to the data, though these do not affect the actual variogram.

We are not hiding this capability from you: the Anisotropic Variography Study folder of Earth Volumetric Studio Projects contains a number of sample applications which demonstrate exactly what is described above. However, we caution that understanding how to apply this to your own projects can be quite daunting and really does require a number of prerequisites:

  • A thorough explanation of these complex applications
  • A reasonable background in Python and how to use Python in Studio
  • An understanding of all of the variogram parameters and their impact on the estimation process on both theoretical datasets as well as real-world datasets.

This 3 hour course addresses these issues in detail.

Discussion of Lithologic (Geologic Indicator Kriging) vs. Stratigraphic (Hierarchical) Geologic Modeling

Stratigraphic geologic modeling utilizes one of two different ASCII file formats (.geo and .gmf) which contain “interpreted” geologic information. These two file formats both describe points on each geologic surface (ground surface and bottom of each geologic layer), based on the assumption of a geologic hierarchy.

The easiest way to describe geologic hierarchy is with an example. Consider the example below of a clay lens in sand with gravel below. Some borings will see only sand above the gravel, while others will reveal an upper sand, clay, and lower sand.

image\\heir1.jpg

The geologic hierarchy for this site will be upper sand, clay, lower sand, and gravel. This requires that the borings with only sand (above the gravel) be described as upper sand, clay, and lower sand, with the clay described as being zero thickness. For this simple example, determining the hierarchy is straightforward. For some sites (as will be discussed later) it is very difficult or even impossible.

image\\heir2.jpg

For those sites that can be described using the above method, it remains the best approach for building a 3D geologic model. Each layer has smooth boundaries and the layers (by nature of hierarchy) can be exploded apart to reveal the individual layer surface features. In the above example, the numbers represent the layer numbers for this site (even though layers 0 and 2 are both sand). Two examples of much more complex sites that are best described by this original approach are shown below.

Geologic Example: Sedimentary Layers and Lenses

image\\evslayr1_wmf.jpg

Geology Example & Figure: Outcrop of Dipping Strata

EVS is not limited to sedimentary layers or lenses. The figure below shows a cross-section through an outcrop of dipping geologic strata. EVS can easily model the layers truncating on the top ground surface.

image\\evslayr2_wmf.jpg

However, many sites have geologic structures (plutons, karst geology, sand channels, etc.) that do not lend themselves to description within the context of hierarchical layers. For these sites, Geologic Indicator Kriging (GIK) offers the ability to build extremely complex models with a minimum of effort (and virtually no interpretation) on the part of the geologist. GIK can also be a useful check of geologic hierarchies developed for sites that do lend themselves to a model based upon hierarchical layers.

GIK uses raw, uninterpreted 3D boring logs as its input. The .pgf (pre-geology file) format is used to represent these logs. PGF files contain descriptions of each boring with x, y, & z coordinates for ground surface and the bottom of each observed geologic unit. Consecutive integer values (e.g. 0 through n-1, for n total observed units in the site) are used to describe each material observed in the entire site.

NOTE: It is important to start your material ID numbering at zero (0) instead of 1.

Usually, materials are numbered based upon a logical classification (such as porosity or particle size); however, the numbering can be arbitrary as long as the numbers are consecutive (don’t leave numbers out of the sequence). For the example given above, we could number the materials as shown in the figure below (even though this numbering sequence is not based on porosity or particle size).

image\\heir3.jpg

For a .pgf file, borings that do not see the clay (material 2 in the figure) would not need to consider the sand as being divided into upper and lower. Rather, every boring is merely a simple ASCII representation of the raw boring logs. The only interpretation involves classification of the observed soil types in each boring and assigning an associated numbering scheme.
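The consecutive-from-zero numbering rule for PGF material IDs can be checked with a small sketch (an illustrative helper, not part of EVS):

```python
def check_material_ids(material_ids):
    """Return True if the distinct ids form consecutive integers starting at zero."""
    unique = sorted(set(material_ids))
    return unique == list(range(len(unique)))

print(check_material_ids([0, 1, 2, 3, 2, 1]))  # True
print(check_material_ids([1, 2, 3]))           # False: numbering must start at 0
print(check_material_ids([0, 1, 3]))           # False: 2 is missing from the sequence
```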

mask horizons

mask horizons receives geologic input into its left input port and an optional input masking surface into its right port.

Module Input Ports

  • Input Field [Field] Accepts a data field.
  • Input Area [Field] Accepts a field defining a surface of the area for masking

Module Output Ports

  • Output Field [Field] Outputs the processed field.

NOTE: The mask is normally applied to the first surface only. If this surface is removed, the mask is lost. However, the “Allow Subsetting” toggle will apply the mask to all horizons, though it will slow down processing and use more memory.

edit_horizons

edit_horizons is an interactive module which allows you to probe points that are selectively added to the creation of each and every stratigraphic horizon. This provides the ability to manually edit horizon surfaces prior to the creation of geologic models.

The method of connecting edit_horizons is unique among our modules. It uses the pink output port from gridding and horizons as its primary input, and it also requires the purple side port from viewer since it requires interactive probing. Its blue output port then becomes equivalent to the blue output of gridding and horizons, but with edited surfaces.

Regardless of the estimation method used originally, edit_horizons uses Natural Neighbor to perform its near-real-time modifications. For this reason, there is a Use Gradients toggle at the top of the user interface, which is identical in function to the one in gridding and horizons.

The other important parameter at the top of the user interface is the Horizon Point Radius. The default (linked) value for this parameter is computed for you as 2% of the X-Y diagonal extents of your input geology. If any of the original data points for the selected horizon being edited fall within the Horizon Point Radius, then we don’t use your probed point based on the assumption that the original data is more defensible and should take precedence.
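The default radius computation and the rejection rule described above can be sketched as follows (illustrative only; `horizon_point_radius` and `accept_probed_point` are hypothetical names, not the EVS API):

```python
import math

def horizon_point_radius(xmin, xmax, ymin, ymax, fraction=0.02):
    """Default (linked) radius: 2% of the X-Y diagonal extent of the input geology."""
    return fraction * math.hypot(xmax - xmin, ymax - ymin)

def accept_probed_point(point, original_points, radius):
    """Reject the probed point if any original data point for the selected
    horizon lies within the radius: the original data takes precedence."""
    px, py = point
    return all(math.hypot(px - ox, py - oy) >= radius for ox, oy in original_points)

r = horizon_point_radius(0.0, 1000.0, 0.0, 1000.0)  # about 28.28 for a 1000 x 1000 site
print(accept_probed_point((500.0, 500.0), [(490.0, 500.0)], r))  # False: too close to data
print(accept_probed_point((500.0, 500.0), [(100.0, 100.0)], r))  # True
```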

Next there is the Probe Action, which has 3 options:

  1. None (default state when the module is instanced)
  2. Reset Position (allows you to move points)
  3. Add Point (allows you to add new surface control points for the selected horizon)

The Horizons list shows all of your geologic horizons. Here, you select the horizon surface you wish to modify. The points that you add only affect the selected horizon. When you change the selected horizon, you can add new points for that surface. You are able to add as many points as you need for any or all of the horizons.

The Horizon Point List is the list of points that you have added by probing in your model. You can only probe on actual objects. These objects can be surfaces from horizons, slices, tubes, or whatever objects you’ve added to your viewer. Slices are very useful since you can move them where you need them so you can probe points at specific coordinates. You are also able to manually change the X, Y, and/or Z coordinates of any point as needed. For each point, a Note: box is provided so you can keep a record of your actions and reasons.

horizon_ranking

The horizon_ranking module is used to give the user control over individual surface priorities and rankings. This allows the user to fine tune their hierarchy in ways much more complex than a simple top-down or bottom-up approach.

Module Input Ports

  • Input Field [Field] Accepts geologic input from modules like gridding and horizons

Module Output Ports

  • Output Field [Field] Outputs the geologic input with re-prioritized hierarchy
  • Geologic legend Information [Geology legend] Outputs the geologic material information

material_mapping

This module can re-assign data corresponding to:

  • Geologic Layer
  • Material ID
  • Indicator
  • Adaptive Indicator

for the purpose of grouping. This provides great flexibility for exploding models or coloring.

Groups are processed from Top to Bottom. You can have overlapping groups or groups whose range falls inside a previous group. In that event, the lower groups override the values mapped in a higher group.

For example, if you have ten material ids (0 through 9) and you want to have them all be 0 except for 5 & 6 which should be 1, this can be accomplished with two groups:

  1. From 0 to 9 Map to 0
  2. From 5 to 6 Map to 1
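The two-group example above can be sketched as follows (illustrative code, not the EVS implementation; groups are processed top to bottom, and a later group overrides earlier ones where ranges overlap):

```python
def apply_groups(values, groups):
    """groups: list of (from_value, to_value, map_to) tuples, in top-to-bottom order."""
    result = []
    for v in values:
        mapped = v
        for lo, hi, target in groups:  # later matching groups overwrite earlier ones
            if lo <= v <= hi:
                mapped = target
        result.append(mapped)
    return result

groups = [
    (0, 9, 0),  # group 1: From 0 to 9, Map to 0
    (5, 6, 1),  # group 2: From 5 to 6, Map to 1 (overrides group 1)
]
print(apply_groups(list(range(10)), groups))  # [0, 0, 0, 0, 0, 1, 1, 0, 0, 0]
```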

Please note that in the animator, you can animate these values. Each group has From, To and Map To values that are numbered zero through eleven (e.g. From0, MapTo5)

Module Input Ports

  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Field [Field] Outputs the processed field.

combine horizons

The combine horizons module is used to merge up to six geologic horizons (surfaces) to create a field representing multiple geologic layers.

The mesh (x-y coordinates) from the first input field will be the mesh in the output. The input fields should have the same scale, origin, and number of nodes in order for the output data to be meaningful.
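The compatibility requirement can be expressed as a small sketch (the field dictionaries and key names here are hypothetical stand-ins, not EVS objects):

```python
def meshes_compatible(field_a, field_b, tol=1e-6):
    """The inputs should share origin, scale and node count for the
    merged output to be meaningful."""
    return (
        field_a["num_nodes"] == field_b["num_nodes"]
        and all(abs(a - b) <= tol for a, b in zip(field_a["origin"], field_b["origin"]))
        and all(abs(a - b) <= tol for a, b in zip(field_a["scale"], field_b["scale"]))
    )

a = {"num_nodes": 2500, "origin": (0.0, 0.0), "scale": (1.0, 1.0)}
b = {"num_nodes": 2500, "origin": (0.0, 0.0), "scale": (1.0, 1.0)}
c = {"num_nodes": 2500, "origin": (10.0, 0.0), "scale": (1.0, 1.0)}
print(meshes_compatible(a, b))  # True
print(meshes_compatible(a, c))  # False: origins differ
```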

It also has a Run toggle (to prevent downstream modules from firing during input setting changes).

combine horizons provides an important ability to merge sets of surfaces or to add additional surfaces to geologic models. It is important to understand the consequences of doing so and the steps that must be taken. The Brown-Grey-Light Brown-Beige port contains the material_ID numbers and names, and it is important that the content of this port reflect the current set of surfaces/layers in the geology. When Material_ID or Geo_Layer is presented in a legend, this port is necessary to automatically provide the layer names. When combine horizons is used to construct modified geologic horizons, its Geologic legend Information port MUST be used instead of the same port in gridding and horizons.

Module Input Ports

  • Input Geologic Field [Field] Accepts a field with data whose grid will be exported.
  • Input Field 1 [Field] Accepts a data field.
  • Input Field 2 [Field] Accepts a data field.
  • Input Field 3 [Field] Accepts a data field.
  • Input Field 4 [Field] Accepts a data field.
  • Input Field 5 [Field] Accepts a data field.

Module Output Ports

  • Geologic legend Information [Geology legend] Supplies the geologic material information for the legend module.
  • Output Geologic Field [Field] Outputs the field with selected data
  • Output Object [Renderable]: Outputs to the viewer.

subset horizons

The subset horizons module allows you to subset the output of gridding and horizons so that downstream modules (3d estimation, horizons to 3d, Geologic Surface) act on only a portion of the layers kriged.

subset horizons is used to select a subset of the layers (and corresponding surfaces) exported from gridding and horizons. This is useful if you want (or need) to krige parameter data in each geologic layer separately.

This is not normally needed with contaminant data, but when you are kriging data such as porosity that is inherently discontinuous across layer boundaries, it is essential that each layer be kriged with data collected ONLY within that layer.

subset horizons eliminates the need for multiple gridding and horizons modules reading data files that are subsets of a master geology. Inserting subset horizons between gridding and horizons and 3d estimation allows you to select one or more layers from the geology.

This functionality is very useful when you want to krige groundwater and soil data using a single master geology file that represents both the saturated and unsaturated zones.

Module Input Ports

  • Input Geologic Field [Field] Accepts a data field from gridding and horizons to krige data into geologic layers.

Module Output Ports

  • Geologic legend Information [Geology legend] Supplies the geologic material information for the legend module.
  • Output Geologic Field [Field] Can be connected to the 3d estimation, 3D_Geology Map, and surfaces from horizons modules.

collapse horizons

The collapse horizons module allows you to subset the output of gridding and horizons so that downstream modules (3d estimation, horizons to 3d, Geologic Surface) act on only a single merged layer.

collapse horizons is used to merge all layers (and corresponding surfaces) exported from gridding and horizons into a single layer (topmost and bottommost surfaces).

collapse horizons eliminates the need for multiple gridding and horizons modules reading data files that are single-layer subsets of a master geology. Inserting collapse horizons between gridding and horizons and 3d estimation kriges all data into a single geologic layer. When used with subset horizons, it allows for creating a single layer that represents only a portion (subset) of the master geology file.

Module Input Ports

  • Input Geologic Field [Field] Accepts a data field from gridding and horizons to krige data into geologic layers.

Module Output Ports

  • Geologic legend Information [Geology legend] Supplies the geologic material information for the legend module.
  • Output Geologic Field [Field] Can be connected to the 3d estimation, 3D_Geology Map, and surfaces from horizons modules.

displace_block

displace_block receives any 3D field into its input port and outputs the same field translated in z according to a selected nodal data component of an input surface, allowing for non-uniform fault block translation.

This module allows for the creation of tear faults and other complex geologic structures. Used in conjunction with distance to surface it makes it possible to easily model extremely complex deformations.

Warning

When displacing 3D grids, especially those with poor-aspect cells (much thinner in Z than in X-Y), if the displacement surface has high slopes the cells can be sheared severely. This can create corrupted cells, which can result in inaccurate volumetric computations. In general, volumes and masses are best computed before displacement.
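The per-node z translation can be sketched as follows (illustrative only, not the EVS implementation; a simple callable stands in for interpolating the displacement surface):

```python
def displace_nodes(nodes, displacement_at):
    """nodes: list of (x, y, z) tuples; displacement_at: callable (x, y) -> dz.
    Each node is shifted in z by the surface's displacement value at its x-y location."""
    return [(x, y, z + displacement_at(x, y)) for x, y, z in nodes]

# A hypothetical tear-fault surface: drop everything east of x = 50 by 10 units.
fault = lambda x, y: -10.0 if x > 50.0 else 0.0

nodes = [(10.0, 0.0, 100.0), (60.0, 0.0, 100.0)]
print(displace_nodes(nodes, fault))  # [(10.0, 0.0, 100.0), (60.0, 0.0, 90.0)]
```

A step change like the `fault` callable above is what shears cells that straddle the discontinuity, which is why the warning recommends computing volumes and masses before displacement.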

Module Input Ports

  • Input Field [Field] Accepts a volumetric field
  • Input Surface [Field] Accepts a 2D surface grid with elevation nodal data. This type of grid is created by gridding and horizons and import raster as horizon.

Module Output Ports

  • Output Field [Field] Outputs the displaced field
  • Output Object [Renderable]: Outputs to the viewer.
Subsections of Display

  • change minmax

    The change_minmax module allows you to override the minimum and/or maximum data values for coloring purposes. This functionality is commonly needed when working with time-series data. For example, the user can set the minmax values to bracket the widest range achieved for many datasets, thus allowing consistent mapping from dataset to dataset during a time-series animation or individual sub-sites.

  • band data

    band data provides a means to color surfaces or volumetric objects (converted to surfaces) in solid colored bands. band data can contour by both nodal and cell data. This module does not do subsetting like plume_shell or plume. It is used in conjunction with these modules to change the way their output is colored.

  • volume renderer

    volume_renderer directly renders a 3D uniform field using either the Back-to-Front (BTF) or Ray-tracing volume rendering techniques. The Ray-tracing mode is available to both OpenGL and the software renderer. The BTF renderer, which is configured as the default, is available only in the OpenGL renderer. NOTE: This module and its rendering technique are not supported in C Tech Web Scenes (CTWS files).

  • opacity by nodal data

    opacity by nodal data provides a means to adjust the opacity (1 - transparency) of any object based on its data values using a simple ramp function which assigns a starting opacity to values less than or equal to the Level Start and an ending opacity to values greater than or equal to the Level End. The resulting output is often similar in appearance to volume rendering. opacity by nodal data converts data into partially transparent surfaces where data values at each point in a grid are represented by a particular color and opacity.

  • slope and aspect

    The slope_and_aspect module determines the slope and aspect of a surface. The slope is the angle between the surface and the horizon. The aspect is the cardinal direction in degrees (rotating clockwise with 0° being North) that the slope is facing. Module Input Ports Z Scale [Number] Accepts Z Scale (vertical exaggeration). Input Field [Field] Accepts a field with scalar or vector data. Module Output Ports

  • select single data

    The select single data module extracts a single data component from a field. select single data can extract scalar data components or vector components. Scalar components will be output as scalar components and vector components will be output as vector components. Module Input Ports Input Field [Field] Accepts a data field. Module Output Ports Output Field [Field] Outputs the subsetted field as faces. Output Object [Renderable]: Outputs to the viewer.

  • import wavefront obj

    The import_wavefront_obj module will only read Wavefront Technologies format .OBJ files which include object textures which are represented (includ

post_samples

The post_samples module is used to visualize:

  • Sampling locations and the values of the properties in .apdv files
  • The lithology specified in a .pgf, .lsdv, .lpdv or .geo files
  • The location and values of well screens in a .aidv file
Warning

When using the Datamap parameters (Minimum and Maximum) unlinked such that the resulting datamap is a subset of the true data range, probing in C Tech Web Scenes will only be able to report values within the truncated data range. Values outside that limited range will display the nearest value within the truncated range.
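The nearest-value behavior described in the warning above amounts to clamping the probed value to the truncated range (an illustrative sketch, not the CTWS implementation):

```python
def probe_value(value, datamap_min, datamap_max):
    """With a truncated datamap, probing reports the nearest value inside the range."""
    return min(max(value, datamap_min), datamap_max)

print(probe_value(125.0, 0.0, 100.0))  # 100.0: clamped to the Maximum
print(probe_value(-5.0, 0.0, 100.0))   # 0.0: clamped to the Minimum
print(probe_value(42.0, 0.0, 100.0))   # 42.0: inside the range, unchanged
```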

These are displayed along with a representation of the borings from which the samples/data were collected. The post_samples module has the capability to process property values to make the posted data values consistent with data used in kriging modules. Data can be represented as spheres or any user specified glyph. The sampling locations may be colored and sized according to the magnitude of the property value, and labels can be applied to the sampling locations with several different options.

Each sampling location can be probed for data by holding the Ctrl button and left-clicking on the sample location.

When you read any of the supported file types, the module automatically selects the proper default settings to display that data type. However, some file formats can benefit from different options depending on your desires and the quantity of data present.

Below is the Properties window for post samples after reading a .PGF file. Note that “Samples” and “Screens” are selected.

The result in the viewer is below.

If we turn on Well Labels and Sample Labels (with some subsetting to declutter), the viewer shows:

The post_samples module can also represent downhole geophysical logs or Cone Penetration Test (CPT) logs with tubes which are colored and/or sized according to the magnitude of the data. It can display nonvertical borings and data values collected along their length, and can also explode borings and sample locations to show their correct position within exploded geologic layering.

When used to read geology files, post_samples will place surface indicators at the top (ground) surface and the bottom of each geologic layer that are colored according to the layer they depict. When a geology file (.geo or .gmf) is exploded without using geologic surface input from gridding and horizons there will be surface indicators at the top and bottom of each layer. You may color the borings by lithology.

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration) from other modules
  • Explode [Number] Accepts the Explode distance from other modules
  • Date [Number] Accepts a date to interpolate time domain data
  • Input Geologic Field [Field] Accepts a data field from gridding and horizons to krige data into geologic layers.
  • Subsetting Feature [Field] Accepts a (1D) line or (2D) surface to be used to subset the borings.
  • Sample Glyph [Field] Allows the user to import a field containing a geometric object which will be the glyph displayed at each sample location.
  • Filename [String / minor] Allows the sharing of file names between similar modules.

Module Output Ports

  • Z Scale [Number] Outputs Z Scale (vertical exaggeration) to other modules
  • Explode [Number] Outputs the Explode distance to other modules
  • Geologic legend Information [Geology legend] Supplies the geologic material information for the legend module.
  • Boring Tubes [Field / minor] Outputs the tube paths as lines with data
  • Boring Data [Field / minor] Outputs the tube paths as lines with data
  • Filename [String / minor] Allows the sharing of file names between similar modules.
  • Analytes Name [String / minor] Outputs a string containing the name of the currently selected analyte or date
  • Sample Data [Renderable]: Outputs to the viewer

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Properties: 
    • selected data component
    • Z Scale (remember this should be set on the Home tab or Application properties)
    • Display Toggles: including screens, borings, well and/or sample labeling, etc.
  • Sample Settings: controls clipping, processing (Log) and clamping of data.
  • Collapse to 2D: controls how 3D data is subset to 2D
  • Geology Settings: controls the display of geologic data
  • Time Settings: controls how the module deals with time domain data
  • Boring Tube Settings: controls how borings are displayed
  • Color Tube Settings: controls the display of colored tubes as an alternative representation (vs. spheres or glyphs)
  • Label Settings: parameters associated with labeling of borings and samples
    • Remember that toggles to turn on labels for Wells or Samples are at the top in Properties.
    • There are type-ins which receive Python expressions providing a great deal of power to adjust how you label wells and samples
    • Default expressions (which vary by file type) are automatically created for you (when these are Linked) to make the labeling more logical.

EXAMPLE .PT FILE & OUTPUT

TRUETYPE, 20.000000, LC, 180.000000, 90.000000, 0.000000, 0.847059, 0.847059, 0.858824, 0.050000, False, 0.200000, 5.000000, None, "Noto Sans"
0.000000, 300.000000, 0.000000, 0, "TTF"
TRUETYPE, 20.000000, LC, 180.000000, 90.000000, 0.000000, 0.847059, 0.847059, 0.858824, 0.050000, False, 0.200000, 5.000000, Bold, "Noto Sans"
250.000000, 300.000000, 0.000000, 0, "TTF Bold"
TRUETYPE, 20.000000, LC, 180.000000, 90.000000, 0.000000, 0.847059, 0.847059, 0.858824, 0.050000, False, 0.200000, 5.000000, Italic, "Noto Sans"
500.000000, 300.000000, 0.000000, 0, "TTF Italic"
TRUETYPE, 20.000000, LC, 180.000000, 90.000000, 0.000000, 0.847059, 0.847059, 0.858824, 0.050000, False, 0.200000, 5.000000, Bold Italic, "Noto Sans"
750.000000, 300.000000, 0.000000, 0, "TTF Bold Italic"
TRUETYPE, 20.000000, UC, 180.000000, 90.000000, 0.000000, 0.847059, 0.847059, 0.858824, 0.050000, True, 0.200000, 5.000000, None, "Noto Sans"
0.000000, 200.000000, 0.000000, 0, "Outlined"
TRUETYPE, 20.000000, UC, 180.000000, 90.000000, 0.000000, 0.847059, 0.847059, 0.858824, 0.050000, True, 0.200000, 5.000000, Bold, "Noto Sans"
250.000000, 200.000000, 0.000000, 0, "Outlined Bold"
TRUETYPE, 20.000000, UC, 180.000000, 90.000000, 0.000000, 0.847059, 0.847059, 0.858824, 0.050000, True, 0.200000, 5.000000, Italic, "Noto Sans"
500.000000, 200.000000, 0.000000, 0, "Outlined Italic"
TRUETYPE, 20.000000, UC, 180.000000, 90.000000, 0.000000, 0.847059, 0.847059, 0.858824, 0.050000, True, 0.200000, 5.000000, Bold Italic, "Noto Sans"
750.000000, 200.000000, 0.000000, 0, "Outlined Bold Italic"
TRUETYPE, 20.000000, UC, 180.000000, 90.000000, 45.000000, 0.847059, 0.847059, 0.858824, 0.050000, False, 0.200000, 5.000000, Bold, "Noto Sans"
250.000000, 400.000000, 0.000000, 0, "TTF Bold at 45° roll"
LINEFONT, 20.000000, UC, 180.000000, 90.000000, 0.000000, 0.847059, 0.847059, 0.858824, 0.050000, None
0.000000, 100.000000, 0.000000, 0, "Singleline"
LINEFONT, 20.000000, UC, 180.000000, 90.000000, 0.000000, 0.847059, 0.847059, 0.858824, 0.050000, Italic
500.000000, 100.000000, 0.000000, 0, "Singleline Italic"
FORWARDFACING, 0.847059, 0.847059, 0.858824, None
0.000000, 0.000000, 0.000000, 0, "ForwardFacing"
END
 

explode_and_scale

The explode_and_scale module is used to separate (or explode) and apply a scaling factor to the vertical dimension (z-coordinate) of objects in a model. explode_and_scale can also translate the fields in the z direction, and control the visibility of individual cell sets (e.g. geologic layers).
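The combined scale/explode transform can be sketched as a per-node formula (an illustrative reconstruction, not the EVS source; the sign and ordering of the explode offset are assumptions):

```python
def explode_and_scale_z(z, layer_index, z_scale=1.0, explode=0.0, z_translate=0.0):
    """Apply vertical exaggeration, then a per-layer offset to pull the
    layers apart, then a global z translation."""
    return z * z_scale + layer_index * explode + z_translate

# Two stacked layers, 5x vertical exaggeration, exploded 100 units apart.
print(explode_and_scale_z(10.0, 0, z_scale=5.0, explode=100.0))  # 50.0
print(explode_and_scale_z(10.0, 1, z_scale=5.0, explode=100.0))  # 150.0
```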

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration) from other modules
  • Explode [Number] Accepts the Explode distance from other modules
  • Input Field [Field] Accepts a data field from 3d estimation or other similar modules.

Module Output Ports

  • Z Scale [Number] Outputs Z Scale (vertical exaggeration) to other modules
  • Explode [Number] Outputs the Explode distance to other modules
  • Output Field [Field / minor] Outputs the field with the scaling and exploding applied.

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Properties: controls the scaling, exploding and Z translation
  • Explode And Scale Settings: controls layer exploding and cell sets

plume shell

The plume_shell module creates the external faces of a volumetric subset of a 3D input. The resulting closed volume “shell” generally is used only as a visualization of a plume and would not be used as input for further subsetting or volumetric computations since it is hollow (empty). This module creates a superior visualization of a plume compared with alternatives such as passing plume output to external_faces, and is quicker and more memory efficient.

Info

Module Input Ports

  • Input Field [Field] Accepts a data field.
  • Isolevel [Number] Accepts the subsetting level.

Module Output Ports

  • Output Field [Field] Outputs the subsetted field as a closed surface.
  • Status [String / minor] Outputs a string containing a description of the operation being performed (e.g. TCE plume above 4.00 mg/kg)
  • Isolevel [Number] Outputs the subsetting level.
  • Plume [Renderable]: Outputs to the viewer.

intersection_shell

intersection_shell is a powerful module that incorporates some of the characteristics of plume_shell, yet allows for a large number of sequential (serial) subsetting operations, just like intersection.

To get the functionality of (the now deprecated) constant_shell module, you would turn off Include Varying Surface.

Like all modules with “intersection” in their name, it allows you to add any number of subsetting operations.

Each operation can be “Above” or “Below” the specified Threshold value, which in Boolean terms corresponds to:

  • A and B, where both the A & B operations are set to Above, or
  • A and (NOT B), where the A operation is set to Above and the B operation is set to Below.

However, the operator is always “and” for intersection modules. If you need an “or” operator to achieve your subsetting, use the union module.
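The sequential AND logic described above can be sketched in a few lines (a hypothetical stand-alone illustration, not EVS code; the `intersect` helper and its boolean-mask representation are assumptions):

```python
def intersect(values, operations):
    """Combine any number of (mode, threshold) subsetting operations with AND,
    where mode is "Above" or "Below" -- mirroring the serial subsetting of
    intersection-style modules."""
    keep = [True] * len(values)
    for mode, threshold in operations:
        for i, v in enumerate(values):
            passes = v >= threshold if mode == "Above" else v <= threshold
            keep[i] = keep[i] and passes      # the operator is always "and"
    return keep

# A and (NOT B): above 5.0 but below 20.0
mask = intersect([1.0, 10.0, 30.0], [("Above", 5.0), ("Below", 20.0)])
# mask -> [False, True, False]
```

An “or” combination (as provided by union) would replace the `and` in the accumulation with `or`.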

This module creates an efficient and superior visualization of a plume that can be sent directly to the viewer for rendering. The intersection_shell module outputs a specialized version of a sequentially subset plume that is suitable for VRML export for 3D printing to create full color physical models.

For output to 3D printing, please jump to the Issues for 3D Printing topic.

Without intersection_shell it is very difficult if not impossible to create a VRML file suitable for printing, especially with complex models.

Module Input Ports

  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Field [Field] Outputs the subsetted field as a closed surface.
  • Output Object [Renderable]: Outputs to the viewer.

intersection_shell is the module that can create an ISOSURFACE. In other words, a surface (not volume) representing part(s) of your plume.

It has two (+) toggles which control the visibility of a plume “shell”.

In general a plume external shell has two components:

  1. That portion which is exactly EQUAL to the Subsetting Level
  2. That portion which is greater than the Subsetting Level

When both toggles are on (default), the complete closed plume shell is displayed.

If you display only the Constant Surface (component 1), you see just the portion at the Subsetting Level.

If you display only the Varying Surface (component 2), you see just the portion above the Subsetting Level.

change_minmax

The change_minmax module allows you to override the minimum and/or maximum data values for coloring purposes. This functionality is commonly needed when working with time-series data. For example, the user can set the min/max values to bracket the widest range achieved across many datasets, allowing consistent color mapping from dataset to dataset during a time-series animation or across individual sub-sites.

This way 100 ppm would always be red throughout the animation, and if some times did not reach a maximum of 100 ppm, there would be no red color mapping for those time-steps.
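The fixed-range color mapping can be sketched as follows (illustrative only; `color_fraction` is a hypothetical helper, not an EVS API):

```python
def color_fraction(value, cmap_min, cmap_max, clamp=False):
    """Map a data value to a 0..1 colormap fraction using overridden min/max.
    With clamp=False only the color mapping changes; clamp=True alters the
    value itself (which, as noted, would change volumetrics results)."""
    if clamp:
        value = max(cmap_min, min(cmap_max, value))
    return (value - cmap_min) / (cmap_max - cmap_min)

# With the range fixed at 0..100 ppm, 100 ppm always maps to the top (red) color
f = color_fraction(100.0, 0.0, 100.0)   # -> 1.0
g = color_fraction(50.0, 0.0, 100.0)    # -> 0.5
```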

NOTE: The Clamp toggle actually changes the data. Use with caution as this will change volumetrics results.

Warning

When using unlinked values (Min and Max) such that the resulting datamap is a subset of the true data range, probing in C Tech Web Scenes will only be able to report values within the truncated data range. Values outside that limited range will display the nearest value within the truncated range.

Module Input Ports

  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Field [Field] Outputs the field with altered data min/max values
  • Output Object [Renderable]: Outputs to the viewer.

band data

band data provides a means to color surfaces or volumetric objects (converted to surfaces) in solid colored bands.

band data can contour by both nodal and cell data.

This module does not perform subsetting like plume_shell or plume; it is used in conjunction with those modules to change the way their output is colored.
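The banding itself amounts to assigning each value to a discrete contour interval, which can be sketched with Python’s standard bisect module (a hypothetical illustration, not EVS code):

```python
import bisect

def band_index(value, contour_levels):
    """Return which solid-color band a value falls in, given ascending
    contour levels -- the discrete coloring that banding applies to
    nodal or cell data."""
    return bisect.bisect_right(contour_levels, value)

levels = [1.0, 10.0, 100.0]           # three contours -> four bands
assert band_index(0.5, levels) == 0   # below the first contour
assert band_index(5.0, levels) == 1
assert band_index(250.0, levels) == 3
```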

Module Input Ports

  • Input Field [Field] Accepts a data field.
  • Input Contour Levels [Contours]: Accepts an array of values representing values to place contours

Module Output Ports

  • Output Field [Field] Outputs the field with altered data min/max values
  • Output Contour Levels [Contours]: Outputs an array of values representing values to be labeled in the legend.
  • Output Object [Renderable]: Outputs to the viewer.

volume_renderer

Volume_renderer directly renders a 3D uniform field using either the Back-to-Front (BTF) or Ray-tracing volume rendering techniques. The Ray-tracing mode is available to both OpenGL and the software renderer. The BTF renderer, which is configured as the default, is available only in the OpenGL renderer.

NOTE: This module and its rendering technique are not supported in C Tech Web Scenes (CTWS files).

The basic concept of volume rendering is quite different from any other rendering technique in EVS. volume_renderer converts data into a fuzzy transparent cloud where data values at each point in a 3D grid are represented by a particular color and opacity.
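The BTF idea can be sketched for a single ray (a minimal illustration of back-to-front compositing, not the module’s actual implementation):

```python
def composite_back_to_front(samples):
    """Back-to-front compositing of (color, opacity) samples along one ray,
    the core of the BTF technique: each nearer sample is blended over the
    color accumulated behind it."""
    color = 0.0                       # scalar "color" for simplicity
    for c, alpha in samples:          # samples ordered back (far) to front
        color = alpha * c + (1.0 - alpha) * color
    return color

# A fully opaque front sample hides everything behind it
assert composite_back_to_front([(0.2, 0.5), (1.0, 1.0)]) == 1.0
```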

Module Input Ports

  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Object [Renderable]: Outputs to the viewer.

opacity by nodal data

opacity by nodal data provides a means to adjust the opacity (1 - transparency) of any object based on its data values, using a simple ramp function which assigns a starting opacity to values less than or equal to the Level Start and an ending opacity to values greater than or equal to the Level End. The resulting output is often similar in appearance to volume rendering. opacity by nodal data converts data into partially transparent surfaces where data values at each point in a grid are represented by a particular color and opacity.
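The ramp function described above can be sketched as (illustrative only; the parameter names mirror the Level Start/End controls, but the function itself is hypothetical):

```python
def opacity_ramp(value, level_start, level_end, start_opacity, end_opacity):
    """Simple ramp: start_opacity at/below Level Start, end_opacity at/above
    Level End, linear interpolation in between."""
    if value <= level_start:
        return start_opacity
    if value >= level_end:
        return end_opacity
    t = (value - level_start) / (level_end - level_start)
    return start_opacity + t * (end_opacity - start_opacity)

assert opacity_ramp(0.0, 1.0, 10.0, 0.0, 1.0) == 0.0   # below the ramp
assert opacity_ramp(5.5, 1.0, 10.0, 0.0, 1.0) == 0.5   # midpoint
assert opacity_ramp(20.0, 1.0, 10.0, 0.0, 1.0) == 1.0  # above the ramp
```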

NOTE: Any module connected after opacity by nodal data MUST have Normals Generation set to Vertex (if there is a Normals Generation toggle on the module’s panel, it must be OFF).

Module Input Ports

  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Field [Field] Outputs the original data with a special new “opacity” data component for use with downstream modules (e.g. slice, plume_shell, etc.)
  • Output Object [Renderable]: Outputs to the viewer.

slope_and_aspect

The slope_and_aspect module determines the slope and aspect of a surface. The slope is the angle between the surface and the horizon. The aspect is the cardinal direction in degrees (rotating clockwise with 0° being North) that the slope is facing.
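The two quantities can be sketched from surface gradients (a hypothetical illustration; the sign and axis conventions here are assumptions, not necessarily those used by the module):

```python
import math

def slope_and_aspect(dz_dx, dz_dy):
    """Slope (angle from horizontal, degrees) and aspect (degrees clockwise
    from North) from surface gradients, with x = East and y = North assumed."""
    slope = math.degrees(math.atan(math.hypot(dz_dx, dz_dy)))
    # Aspect is the downslope (facing) direction: 0 deg = North, 90 deg = East.
    aspect = math.degrees(math.atan2(-dz_dx, -dz_dy)) % 360.0
    return slope, aspect

s, a = slope_and_aspect(1.0, 0.0)   # plane rising to the east
# slope is 45 degrees; the downslope face points West (270 degrees)
```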

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration).
  • Input Field [Field] Accepts a field with scalar or vector data.

Module Output Ports

  • Output Field [Field] Outputs both slope and aspect data as a field
  • Output Slope Object [Renderable]: Outputs to the viewer.
  • Output Aspect Object [Renderable]: Outputs to the viewer.

select single data

The select single data module extracts a single data component from a field. select single data can extract scalar data components or vector components. Scalar components will be output as scalar components and vector components will be output as vector components.

Module Input Ports

  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Field [Field] Outputs the field containing only the selected data component.
  • Output Object [Renderable]: Outputs to the viewer.

import_wavefront_obj

The import_wavefront_obj module will only read Wavefront Technologies format .OBJ files whose object textures are represented (included) as a single image file. Each file set is actually a set of 3 files, which must always include the following 3 file types with the same base file name, all in the same folder:

  1. The .obj file (this is the file that we browse for)
  2. A .mtl (Material Template Library) file
  3. An image file (e.g. .jpg) which is used for the texture. Note: there must be only ONE image/texture file. We do not support multiple texture files.
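A quick pre-flight check of such a file set might look like this (an illustrative helper, not part of EVS; the accepted image extensions are assumptions):

```python
from pathlib import Path

def check_obj_file_set(obj_path):
    """Verify the three-file set described above: the .obj, a .mtl with the
    same base name, and exactly ONE image/texture file, all in one folder."""
    obj = Path(obj_path)
    mtl = obj.with_suffix(".mtl")
    images = [p for p in obj.parent.iterdir()
              if p.suffix.lower() in (".jpg", ".jpeg", ".png", ".bmp")]
    problems = []
    if not obj.exists():
        problems.append("missing .obj file")
    if not mtl.exists():
        problems.append("missing .mtl file")
    if len(images) != 1:
        problems.append("need exactly ONE image/texture file, found %d"
                        % len(images))
    return problems   # empty list means the set looks usable
```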

This module provides the user with the capability to integrate complex photo-realistic site plans, buildings, and other 3D features into the EVS visualization, to provide a frame of reference for understanding the three dimensional relationships between the site features, and characteristics of geologic, hydrologic, and chemical features.

Info

This module intentionally does not have a Z-Scale port since files of this class are so often not in a user’s model projected coordinate system. Instead, we provide a Transform Settings group that allows for a much more complex set of transformations including scaling, translations and rotations.

Module Output Ports

  • Output Object [Renderable]: Outputs to the viewer

Properties and Parameters

The Properties window includes the following parameters:

Texture Options: These allow you to enhance the image used for texturing to achieve the best looking final output.

Transform Settings: This allows you to add any number of Translation or Scale transformations in order to place your Wavefront Object in the same coordinate space as the rest of your “Real-World” model. It is very typical that Wavefront Objects are in a rather arbitrary local coordinate system that will have no defined transformation to any standard coordinate projection.

Generally you should know whether the coordinates are in feet or meters, and if those units are not correct, do that scaling as your first set of transforms.

It will be up to you to determine the set of translations that will properly place this object in your model. Hopefully rotations will not be required, but they are possible with the Transform List.
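Applying an ordered list of scale and translate transformations can be sketched as follows (a hypothetical helper; the site coordinates in the example are made up):

```python
def apply_transforms(point, transforms):
    """Apply an ordered list of ("scale", s) / ("translate", (dx, dy, dz))
    operations to a point, in the spirit of the Transform Settings group:
    unit scaling first, then translations to the site coordinate system."""
    x, y, z = point
    for kind, arg in transforms:
        if kind == "scale":
            x, y, z = x * arg, y * arg, z * arg
        elif kind == "translate":
            dx, dy, dz = arg
            x, y, z = x + dx, y + dy, z + dz
    return (x, y, z)

# Local model in feet, site in meters: scale by 0.3048, then shift to a
# (made-up) site origin
p = apply_transforms((100.0, 0.0, 0.0),
                     [("scale", 0.3048), ("translate", (500000.0, 4.0e6, 0.0))])
# x becomes roughly 500030.48 in site coordinates
```

Note that order matters: translating first and scaling second would also scale the translation, which is rarely what you want.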


Subsections of Analysis

volumetrics

The volumetrics module is used to calculate the volumes and masses of soil, and of chemicals in soil and ground water, within a user-specified constant_shell (surface of constant concentration) and set of geologic layers. The user inputs the units for the nodal properties, the model coordinates, and the type of processing that has been applied to the nodal data values, then specifies the subsetting level and the soil and chemical properties to be used in the calculation. The module performs an integration of both the soil volumes and chemical masses that are within the specified constant_shell. The results of the integration are displayed in the EVS Information Window and in the module output window.

The volumetrics module computes the volume and mass of everything passed to it. To compute the volume/mass of a plume, you must first use a module like plume or intersection to subset your model.

NOTE: Do not use plume_shell or intersection_shell upstream of volumetrics since their output is a hollow shell without any volume.

The volumetrics module computes volumes and masses of analytes using the following method:

  • Each cell within the selected geologic units is analyzed
  • The mass of analyte within the cell is integrated based on concentrations at all nodes (and computed cell division points)
  • The volumes and masses of all cells are summed
  • Centers of mass and eigenvectors are computed
  • For soil calculations the mass of analyte is directly computed from the computed mass of soil (e.g. mg/kg). This is affected by the soil density parameter (all densities should be entered in gm/cc).
  • For groundwater calculations, the mass of analyte (Chemical Mass) is computed by first determining the volume of water in each cell. This uses the porosity parameter and each individual cell’s volume. From the cell’s water volume, the mass of analyte is directly computed (e.g. mg/liter).
  • The volume of analyte (Chemical Volume) is computed from the Chemical Mass using the “Chem Density” parameter (all densities should be entered in gm/cc).
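The per-cell soil vs. groundwater mass computation above can be sketched as follows (an illustrative simplification assuming a single mean concentration per cell rather than a true integration over nodes; the default density and porosity values are placeholders, not EVS defaults):

```python
def cell_chemical_mass(cell_volume_m3, mean_conc, medium,
                       soil_density_gcc=1.6, porosity=0.30):
    """Chemical mass (kg) in one cell, following the soil vs. groundwater
    rules: concentrations are mg/kg for soil and mg/L for groundwater,
    densities in gm/cc."""
    if medium == "soil":
        # soil mass (kg) = volume (m3) * density (g/cc == tonne/m3) * 1000
        soil_mass_kg = cell_volume_m3 * soil_density_gcc * 1000.0
        return mean_conc * soil_mass_kg * 1e-6           # mg/kg -> kg
    else:  # groundwater
        water_volume_l = cell_volume_m3 * porosity * 1000.0  # m3 -> liters
        return mean_conc * water_volume_l * 1e-6             # mg/L -> kg

# 10 m3 of soil at 1.6 g/cc holds 16,000 kg of soil; at 50 mg/kg that is
# 0.8 kg of analyte
assert abs(cell_chemical_mass(10.0, 50.0, "soil") - 0.8) < 1e-9
```

Chemical Volume would then follow by dividing the summed Chemical Mass by the Chem Density.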

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration).
  • Explode [Number] Accepts the Explode distance from other modules
  • Input Field [Field] Accepts a field with data.
  • String for Output [String]
  • Input Subsetting Level [Number] Accepts the subsetting level

Module Output Ports

  • Output Subsetting Level [Number] Outputs the subsetting level
  • Soil Volume Level [Number] Outputs the computed soil volume
  • Soil Mass Level [Number] Outputs the computed soil mass
  • Chemical Volume Level [Number] Outputs the computed chemical volume
  • Chemical Mass Level [Number] Outputs the computed chemical mass
  • Nodal Data Component [String] The name of the analyte
  • Volume Units [String] The units of the volume calculations (e.g. m3)
  • Result Value [Number] The final output
  • Output Second Moment Object [Renderable]: Outputs to the viewer

You can use the Geologic Layers selection list which allows you to choose the cell sets (geologic layers) that you want to perform computations on.

The Soil Density and Porosity inputs allow the user to input the properties of the soil matrix in which the chemicals reside. Note that if the mass of chemicals in a combined soil and ground water plume are to be estimated, one of the geologic layers should be set up to have a boundary within it that corresponds to the water table position. In essence, this will create two layers out of one geologic unit that can be used to separate the soil domain from the ground water domain. The user can then choose the appropriate Nodal Data Units for each layer in the two domains, and obtain volumetrics estimates by summing the results in individual layers. There are several other alternative methods for completed volumetrics estimates in continuous soil and ground water plumes, which involve either setting up separate soil and ground water models, or using the Field Math module to remove and include specified areas of the domains.

The Chemical Density input allows the user to input the density of the chemical constituent for which mass estimates are being completed. Note that this value is used to calculate the volume of chemical in the specified constant_shell, as the mass units are calculated directly from the nodal data.

Volume Dollars is used along with the total volume of the chemical to indicate the cost of the removal of the chemical.

Mass Dollars is used, along with the total chemical mass, to determine the value of the chemical mass.

Volume Units is used to select which units the volume should be calculated in. For the Specified Unit Ratio the units to convert to are liters. For example if your units were Cubic Meters the ratio would be 1000.

Mass Units is used to select which units the mass should be calculated in. For the Specified Unit Ratio the units to convert to are Kilograms.
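The Specified Unit Ratio is a plain multiplier to the reference unit (liters for volume, kilograms for mass), as in this hypothetical helper:

```python
def to_output_units(value, specified_unit_ratio):
    """Apply the Specified Unit Ratio: the factor converting the model's unit
    to the reference unit (liters for volume, kilograms for mass). For
    volumes in cubic meters the ratio is 1000, since 1 m3 = 1000 L."""
    return value * specified_unit_ratio

assert to_output_units(2.5, 1000.0) == 2500.0   # 2.5 m3 -> 2500 liters
```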

The Output Results File toggle causes volumetrics to write a file to the ctech folder (volumetrics_results.txt) that contains all volumetrics information in a format suitable for input to programs like Excel (tab delimited .txt file). This file is written to in an append mode. It will grow in size as you use volumetrics. You should delete or move the file when you’re done with it.

The Run Automatically toggle, when selected, causes the module to run as soon as any of the input parameters have changed. When not selected the accept button must be pushed for the module to run.

There is an advanced window that can be opened by checking the Advanced Output Options toggle.

The advanced panel provides many capabilities including Spatial Moment Analysis.

  • Spatial Moment Analysis involves computing the zeroth, first, and second moments of a plume to provide measures of the mass, location of the center of mass, and spread of the plume.
  • The zeroth moment is a mass estimate for each sample event and COC. The estimated mass is used to evaluate the change in total mass of the plume over time.
  • The first moment estimates the center of mass of the plume (as coordinates Xc, Yc & Zc).
  • The second moment indicates the spread of the contaminant about the center of mass (sxx, syy and szz), or the distance of contamination from the center of mass. This is somewhat analogous to the standard deviation of the plume along three orthogonal axes, represented as an ellipsoid created using the eigenvalues as the ellipsoid major and minor axes, and the eigenvectors to orient the ellipsoid. The orientation of the ellipsoid is aligned with the primary axis of the plume (not the coordinate axes).
  • The Second Moment ellipsoid represents the spread of the plume in the x, y and z directions. Freyberg (1986) describes the second moment about the center of mass as the spatial covariance tensor.
  • The components of the covariance tensor are indicative of the spreading of the contaminant plume about the center of mass. The values of sxx, syy and szz represent the axes of the covariance ellipsoid. The volumetrics module provides a scaling parameter that allows you to view the ellipsoid corresponding to the one-sigma (default) or higher sigma (higher confidence) representation of the contaminant spread.
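The three moments can be sketched from per-cell masses and centers (a hypothetical illustration; EVS integrates within cells rather than using a single center point per cell, and orienting the ellipsoid would additionally require an eigen-decomposition of the covariance tensor):

```python
def spatial_moments(cells):
    """Zeroth, first, and second spatial moments of a plume from per-cell
    (mass, (x, y, z)) pairs: total mass, center of mass, and the covariance
    tensor whose diagonal carries the sxx, syy, szz spread terms."""
    total = sum(m for m, _ in cells)                          # zeroth moment
    cx, cy, cz = (sum(m * p[i] for m, p in cells) / total     # first moment
                  for i in range(3))
    cov = [[sum(m * (p[i] - (cx, cy, cz)[i]) * (p[j] - (cx, cy, cz)[j])
                for m, p in cells) / total
            for j in range(3)] for i in range(3)]             # second moment
    return total, (cx, cy, cz), cov
```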

The Water Density type in window allows the user to specify the density of water. The default of 0.9999720 g/mL (gm/cc) is the Density of Water at 4.5 degrees Celsius.

The Output Filetype radio list is used to select the format of the output file. The default is a tab-delimited single-line output; the second choice formats the output the same as the display window; and the third option formats the output separated by tabs on multiple lines. Changing these options will not cause the module to run; you must hit accept or change an input value for the module to run.

Overwrite causes the output file to be overwritten instead of appended to. This toggle will only be selected for one run and then will unselect itself and begin appending again, unless it is rechecked. Selecting this toggle will not cause the module to run; you must hit accept or change an input value for the module to run.

The Date type in allows you to set the date, which is output only in the Tabbed Multi-Line file.

Connecting the Red Output Port of volumetrics to the viewer will display the Second Moment Ellipsoid and the Eigenvectors (if turned on).

The three toggles:

  1. Display Mass Along Major Eigen Vector
  2. Display Mass Along Minor Eigen Vector
  3. Display Mass Along Interm(ediate) Eigen Vector

allow you to turn on and off the lines lying along the Major, Minor, and Intermediate Eigenvectors. These vectors represent the second moment of mass, and by default have chemical data mapped to them. These lines have the same orientation as the second moment ellipsoid, but they stretch only to the extents of the model. To output these lines the Export Results button must be pushed.

The Segments In Lines type in allows you to control the number of segments making up each line; the larger the number of segments, the more closely the node data along the line will match the node data of the model.

The Color Lines by Axis toggle strips the node data from the lines, leaving them colored by the axis they represent.

EllipsoidResolution is an integer value that determines the number of faces used to approximate the analytically smooth ellipsoid. The higher the resolution, the smoother the ellipsoid.

EllipsoidScale is a scaling factor for the second moment ellipsoid. A value of 1.0 (default) is analogous to one-sigma (67%) statistical confidence. Higher values would provide an indication of the size of the eigenvalues with a higher statistical confidence.

cell_volumetrics

The cell_volumetrics module provides cell by cell volumetrics data. It creates an extremely large output file with volume, contaminant mass and cell centers for every cell in the grid.

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration).
  • Explode [Number] Accepts the Explode distance from other modules
  • Input Field [Field] Accepts a field with data.
  • String for Output [String]
  • Input Subsetting Level [Number] Accepts the subsetting level

Module Output Ports

  • Output Subsetting Level [Number] Outputs the subsetting level

compute surface area

The compute surface area module is used to calculate the area of the entire input field. The input data to compute surface area must be a two-dimensional data field output from krig_2d, slice, or any subsetting module which outputs two-dimensional data (slice, plume with 2D input, or plume_shell). The results of the integration are updated each time the input changes.

Module Input Ports

  • Input Field [Field] Accepts a data field which is a surface.

Module Output Ports

  • Output Area [Number] The area in user units squared
  • Units [String] The units (e.g. ft or m)

file_statistics

The file_statistics module is used to check the format of: *.apdv; *.aidv; *.geo; *.gmf; *.vdf; and *.pgf files, and to calculate and display statistics about the data contained in these files. This module also calculates a frequency distribution of properties in the file. During execution, file_statistics reads the file, displays an error message if the file contains errors in format or numeric values, and then displays the statistical results in the EVS Information window.

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration) from other modules
  • Filename [String / minor] Allows the sharing of file names between similar modules.

Module Output Ports

  • Z Scale [Number] Outputs Z Scale (vertical exaggeration) to other modules
  • Sample Data [Field / minor] Outputs the data as points (size of points can be controlled).
  • Filename [String / minor] Allows the sharing of file names between similar modules.
  • Mean Level [Number] Outputs the mean data value
  • Median Level [Number] Outputs the median data value
  • Min Level [Number] Outputs the minimum data value
  • Max Level [Number] Outputs the maximum data value
  • Number Of Points [Number] Outputs the number of points
  • Statistics [String / minor] Outputs a string containing the full output normally sent to the Information window
  • Sample Object [Renderable]: Outputs to the viewer

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Data Processing: controls clipping, processing (Log) and clamping of input data
  • Time Settings: controls how the module deals with time domain data

statistics

The statistics module is used to analyze the statistical distribution of a field with nodal data. The data field can contain any number of data components. Statistical analyses can only be performed on scalar nodal data components. An error occurs if a statistical analysis is attempted on vector data. Output from the statistics module appears in the EVS Information Window. Output consists of calculated min and max values, the mean and standard deviation of the data set, the distribution of the data set, and the coordinate extents of the model.

The first port (the leftmost one) should contain a mesh with nodal data. If no nodal data is present, statistics will only report the extents and centroid of your mesh. Data sent to the statistics module for analysis will reflect any data transformation or manipulation performed in the upstream modules. Any mesh data sent to the port is used for calculating the X, Y and Z coordinate ranges. The mesh coordinates have no effect on the data distribution. Cell based data is not used.

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration) from other modules
  • Input Geologic Field [Field] Accepts a data field upon which statistics are computed

Module Output Ports

  • Mean Level [Number] Outputs the mean data value
  • Median Level [Number] Outputs the median data value
  • Min Level [Number] Outputs the minimum data value
  • Max Level [Number] Outputs the maximum data value
  • Number Of Points [Number] Outputs the number of points
  • Statistics [String / minor] Outputs a string containing the full output normally sent to the Information window


  • legend

    legend The legend module is used to place a legend which help correlate colors to analytical values or materials . The legend shows the relationship between the selected data component for a particular module and the colors shown in the viewer. For this reason, the legend’s RED input port must be connected to the RED output port of a module which is connected to the viewer and is generally the dominant colored object in view.

  • 3d legend

    The legend module is used to place a legend which help correlate colors to analytical values or materials . The legend shows the relationship between

  • axes

    axes General Module Function The axes module is used to place 3D axes in the viewer scaled by the model data and/or user defined limits. Axes accepts data from many of the Subsetting and Processing modules and outputs directly to the viewer. Data passed to Axes should come from modules which have scaled or transformed the mesh data, for example explode_and_scale. Axes generated by axes and displayed in the viewer are transformable with other objects in the viewer.

  • direction indicator

    direction indicator The direction indicator module is used to place a 3D North Arrow or Rose Compass in the 3D viewer scaled by the model data and/or user defined parameters. Module Input Ports View[View] This is the primary Purple port which connects to the viewer to receive the extent of all objects in the viewer AND outputs the north arrow or compass rose. This port can be used as your only connection from direction indicator to the viewer and no other connections are needed. Minor Ports not needed for most all cases Z Scale [Number] Accepts Z Scale (vertical exaggeration) from other modules Explode [Number] Accepts the Explode distance from other modules Module Output Ports

  • viewer to frame

    viewer to frame The viewer to frame module is used to place a image of one viewer inside a second viewer’s non-transformable overlay. It is extremely easy to use. There are sliders to adjust size and position. Module Input Ports View [View] Connects to the viewer used as an overlay Module Output Ports Output Object [Renderable] Outputs the input view as a 2D overlay in the viewer.

  • add logo

    add_logo The add_logo module is used to place a logo or other graphic object in the viewer’s non-transformable overlay. It is extremely easy to use. There are sliders to adjust size and position and a button to select the image file to use as a logo. Module Input Ports View [View] Connects to the viewer Module Output Ports

  • titles

    titles Titles connects to the red port on the viewer and provides a means to place text in the non-transformable 2D Overlay of the viewer. The text is not transformed by viewer transformations and is positioned using sliders in the Titles user interface. Module Input Ports Input String [String] Accepts the string to display. Number 1 [Number]: Accepts a number used to construct a the title. (this is effectively a simple version of format_string Number 2 [Number]: Accepts a number used to construct a the title String 1 [String]: Accepts a number used to construct a the title Module Output Ports

  • 3d titles

    3d titles 3d titles connects to the red port on the viewer and provides a means to place text in 3D space of your model. The text is transformed by viewer transformations and is positioned using X, Y & Z sliders in the Titles user interface. Module Input Ports Input String [String] Accepts the string to display. Module Output Ports

  • place text

    place_text place_text replaces both Text3D and MultiText3D and provides a means to interactively place 2D and 3D renderable text strings or to read a .PT File (or legacy .EMT file) to place the text. Module Input Ports View [View] This is the primary Purple port which connects to the viewer to receive the extent of all objects in the viewer AND outputs the test. This port can be used as your only connection from place_text to the viewer and no other connections are needed. Minor Ports not needed for most all cases Z Scale [Number] Accepts Z Scale (vertical exaggeration) from other modules Explode [Number] Accepts the Explode distance from other modules Module Output Ports

  • interactive labels

    interactive_labels The interactive_labels module allows the user to place formatted labels at probed locations within the viewer. The data displayed is the data at the probed location.

  • format string

    format_string format_string allows you to construct a complex string (for use in titles or as file names) using multiple string and numeric inputs. An expression determines the content of the output. The Expression is treated as a Python f-string, which allows the variables to be used in Python expressions.

Subsections of Annotation

legend

The legend module is used to place a legend which helps correlate colors to analytical values or materials. The legend shows the relationship between the selected data component for a particular module and the colors shown in the viewer. For this reason, the legend’s RED input port must be connected to the RED output port of a module which is connected to the viewer and is generally the dominant colored object in view.

Many modules with red output ports have a selector to choose which ONE of the nodal or cell data components are to be used for coloring. The name of the selected data component will be displayed as the Title of the legend if the Label Options are set to Automatic (default).

If the data component to be viewed is either Geo_Layer or Material_ID (for models where the grid is based upon geology), the Geologic legend Information port from gridding and horizons (or lithologic modeling) must also be connected to legend to provide the Geologic Layer (or material) names for automatic labeling. When this port is connected it will have no effect if any other data component is selected.

The minimum and maximum values are taken from the data input as defined in the datamap. Labels can be placed at user defined intervals along the color scale bar. Labels can consist of user input alphanumerical values or automatically determined numerical values.

Module Input Ports

  • Geologic legend Information [Geology legend] Accepts the geologic material information from modules that read geologic data.
  • Contour Levels [Contours]: Accepts an array of values representing values to be labeled in the legend.
  • Input Object [Renderable]: Accepts the output of a module to which the legend corresponds.

Module Output Ports

  • Output legend [Field] Outputs the legend as a field to allow texturing
  • Title Output [String] Can be connected to the 3d estimation, 3D_Geology Map, and surface from horizons(s) modules.
  • Output Object [Renderable]: Outputs to the viewer.

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Label Options: controls the legend labeling
  • Scale Options: controls the legend size and placement

Text Formatting:

Text formatting can be performed with a very restrictive subset of Markdown Syntax

  • Bold **bold text**

  • Italic _italicized text_

  • Headings (Larger and bolder text)

    • H1

    • H2

    • H3

axes

General Module Function

The axes module is used to place 3D axes in the viewer scaled by the model data and/or user defined limits. Axes accepts data from many of the Subsetting and Processing modules and outputs directly to the viewer. Data passed to Axes should come from modules which have scaled or transformed the mesh data, for example explode_and_scale. Axes generated by axes and displayed in the viewer are transformable with other objects in the viewer.

The User interface to axes is very comprehensive. Each coordinate direction axis can be individually controlled. Axis labels and tick marks for each axes can be specified. The label font, label precision, label orientation, and other label parameters are all user specified. Many of the parameters do not have default values that will produce the desired results because many variables control how the axes should be defined.

axes requires a field input to position and size the axes. If you disconnect the (blue/black) field input port, you do not lose the axes bounds values and your axes remain in place. This is useful when field data changes in an animation so that you don’t constantly recreate the axes.

Also, the size of text and tick marks is based on a percentage of the x-y-z extent of the input field. This now allows you to set the extent of one or more axes to zero so you can have a scale of only one or two dimensions.

Module Input Ports

  • View[View] This is the primary Purple port which connects to the viewer to receive the extent of all objects in the viewer AND outputs the axes.
    • This port can be used as your only connection from axes to the viewer and no other connections are needed.
  • Input Geologic Field [Field] Accepts a field to receive the extent
  • Input Objects [Renderable]: Accepts a renderable output port to receive the extent
  • Minor Ports (not needed in most cases)
    • Z Scale [Number] Accepts Z Scale (vertical exaggeration) from other modules
    • Explode [Number] Accepts the Explode distance from other modules

Module Output Ports

  • Output Object [Renderable] Outputs the axes to the viewer.

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Properties: controls the scaling and exploding
  • Spatial Definition: Controls the extents and grid densities
  • Display Settings: controls layer exploding and cell sets
  • All Axes Settings: Controls parameters for XYZ simultaneously
  • X Axes Settings: Controls parameters for X axis
  • Y Axes Settings: Controls parameters for Y axis
  • Z Axes Settings: Controls parameters for Z axis

in_view (Purple) : This port accepts the output of the viewer directly. It will draw the axes around everything displayed in the viewer. This port will only cause the module to run when the port is connected or when the “Accept Current Values” button is pressed. If the model’s coordinate extents are going to change often, then another input port should be used.

objects_in (Red) : This port accepts any number of (Red) output ports from other modules. When any of those modules are run the axes module will run as well.

meshs_in (Blue/Black) : This port accepts any number of (Blue/Black) output ports from other modules. When any of those modules are run the axes module will run as well.

explode (Grey/Green) : This port accepts a float value representing the explode distance from explode_and_scale. If you have an explode distance set to anything but 0, the Z axis tick labels are not printed.

z_scale (Grey/Brown) : This port accepts a float value representing Z exaggeration of the model from modules like explode_and_scale to ensure that the Z axis is correctly labeled.

direction indicator

The direction indicator module is used to place a 3D North Arrow or Rose Compass in the 3D viewer scaled by the model data and/or user defined parameters.

Module Input Ports

  • View[View] This is the primary Purple port which connects to the viewer to receive the extent of all objects in the viewer AND outputs the north arrow or compass rose.
    • This port can be used as your only connection from direction indicator to the viewer and no other connections are needed.
  • Minor Ports (not needed in most cases)
    • Z Scale [Number] Accepts Z Scale (vertical exaggeration) from other modules
    • Explode [Number] Accepts the Explode distance from other modules

Module Output Ports

  • Output For Transform [Renderable] Provides an additional output port if you want to duplicate direction indicator’s output via a transform_group module.

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Properties: controls the scaling and positioning
  • North Arrow Settings:
  • Compass Rose Settings:

viewer to frame

The viewer to frame module is used to place an image of one viewer inside a second viewer’s non-transformable overlay. It is extremely easy to use.

There are sliders to adjust size and position.

Module Input Ports

  • View [View] Connects to the viewer used as an overlay

Module Output Ports

  • Output Object [Renderable] Outputs the input view as a 2D overlay in the viewer.

add_logo

The add_logo module is used to place a logo or other graphic object in the viewer’s non-transformable overlay. It is extremely easy to use. There are sliders to adjust size and position and a button to select the image file to use as a logo.

Module Input Ports

  • View [View] Connects to the viewer

Module Output Ports

  • Output Object [Renderable] Outputs the logo as a 2D overlay in the viewer.

titles

Titles connects to the red port on the viewer and provides a means to place text in the non-transformable 2D Overlay of the viewer. The text is not transformed by viewer transformations and is positioned using sliders in the Titles user interface.

Module Input Ports

  • Input String [String] Accepts the string to display.
  • Number 1 [Number]: Accepts a number used to construct the title (this is effectively a simple version of format_string).
  • Number 2 [Number]: Accepts a number used to construct the title.
  • String 1 [String]: Accepts a string used to construct the title.

Module Output Ports

  • Output Object [Renderable]: Outputs to the viewer. NOT REQUIRED when the View port is used.

Text Formatting:

Text formatting can be performed with a limited subset of Markdown Syntax.

  • If you need multiple spaces or need to indent with spaces, you must use this instead of a space: ** **

    • 4 spaces in a row would be: ** ** ** ** ** ** ** **
  • **bold** = bold

  • _italics_ = italics

  • Numbered List

      1. First Item
      2. Second Item
      3. Third Item
    • Only works with Left Justified text
  • Bulleted List

      • First Item
      • Second Item
      • Third Item
    • Only works with Left Justified text
  • Monospaced: the text to be monospaced is surrounded by tick marks

    • Note: This uses the Tick mark which is the character below the tilde "~"
  • Horizontal Rule (line across entire width) ___

    • Note: three underscore characters
  • Colored Text

  • This is the default text, but <font color="#FF0000">these words are red.</font>

  • Font Size

  • Some big text in the middle

Font Change

  • Some larger Monospaced Font text in the middle.
  • <h?> … </h?> Heading (?= 1 for largest to 6 for smallest, eg h1)
  • ** … ** Bold Text
    • * … * Italic Text
  • Underline Text
  • Strikeout
  • Superscript - Smaller text placed above normal text
  • Subscript - Smaller text placed below normal text
  • Small - Fineprint size text

3d titles

3d titles connects to the red port on the viewer and provides a means to place text in the 3D space of your model. The text is transformed by viewer transformations and is positioned using X, Y & Z sliders in the Titles user interface.

Module Input Ports

  • Input String [String] Accepts the string to display.

Module Output Ports

  • Output Object [Renderable]: Outputs to the viewer. NOT REQUIRED when the View port is used.

place_text

place_text replaces both Text3D and MultiText3D and provides a means to interactively place 2D and 3D renderable text strings or to read a .PT File (or legacy .EMT file) to place the text.

Module Input Ports

  • View [View] This is the primary Purple port which connects to the viewer to receive the extent of all objects in the viewer AND outputs the text.
  • This port can be used as your only connection from place_text to the viewer and no other connections are needed.
  • Minor Ports (not needed in most cases)
    • Z Scale [Number] Accepts Z Scale (vertical exaggeration) from other modules
    • Explode [Number] Accepts the Explode distance from other modules

Module Output Ports

  • Output For Transform [Renderable] Provides an additional output port if you want to duplicate place_text’s output via a transform_group module.
  • Minor Ports (not needed in most cases)
    • Z Scale [Number] Outputs Z Scale (vertical exaggeration) to other modules
    • Explode [Number] Outputs the Explode distance to other modules

interactive_labels

The interactive_labels module allows the user to place formatted labels at probed locations within the viewer. The data displayed is the data at the probed location.

Module Input Ports

  • Z Scale [Number / minor] Accepts Z Scale (vertical exaggeration) from other modules
  • Number Variable [Number / minor] Accepts a number to be used in the expression
  • Input String Variable [String / minor] Accepts a string to be used in the expression
  • View [View / minor] Connects to the viewer to allow probing on all objects.

Module Output Ports

  • Z Scale [Number / minor] Outputs Z Scale (vertical exaggeration) to other modules
  • Output Number Variable [Number / minor] Outputs a number to be used in the expression
  • Output String Variable [String / minor] Outputs a string to be used in the expression
  • Output Object [Renderable] Outputs to the viewer.

format_string

format_string allows you to construct a complex string (for use in titles or as file names) using multiple string and numeric inputs. An expression determines the content of the output.

The Expression is treated as a Python f-string, which allows the variables to be used in Python expressions.

Module Input Ports

  • Date [Number] Accepts a date
  • Number 1 [Number] Accepts a number
  • Number 2 [Number] Accepts a number
  • Number 3 [Number] Accepts a number
  • Number 4 [Number] Accepts a number
  • String 1 [String] An input string
  • String 2 [String] An input string
  • String 3 [String] An input string

Module Output Ports

  • Output String [String] The resultant string output

Note: Strings cannot be formatted or subsetted

The available floating point presentation types are:

  • ’e’ - Exponent notation. Prints the number in scientific notation using the letter ’e’ to indicate the exponent.
  • ‘E’ - Exponent notation. Same as ’e’ except it converts the ’e+XX’ to uppercase ‘E+XX’ .
  • ‘f’ - Fixed point. Displays the number as a fixed-point number.
  • ‘g’ - General format. For a given precision p >= 1, this rounds the number to p significant digits and then formats the result in either fixed-point format or in scientific notation, depending on its magnitude.
    • The precise rules are as follows: suppose that the result formatted with presentation type ’e’ and precision p-1 would have exponent exp. Then if -4 <= exp < p, the number is formatted with presentation type ‘f’ and precision p-1-exp. Otherwise, the number is formatted with presentation type ’e’ and precision p-1. In both cases insignificant trailing zeros are removed from the significand, and the decimal point is also removed if there are no remaining digits following it.

      • Positive and negative infinity, positive and negative zero, and nans, are formatted as inf, -inf, 0, -0 and nan respectively, regardless of the precision.
      • A precision of 0 is treated as equivalent to a precision of 1.
      • The default precision is 6.
  • ‘G’ - General format. Same as ‘g’ except switches to ‘E’ if the number gets too large.
  • ’n’ - Number. This is the same as ‘g’, except that it uses the current locale setting to insert the appropriate number separator characters.
  • ‘%’ - Percentage. Multiplies the number by 100 and displays in fixed (‘f’) format, followed by a percent sign.
  • ’’ (None) - similar to ‘g’, except that it prints at least one digit after the decimal point.
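These presentation types follow standard Python format-specification behavior, so they can be previewed in plain Python (a quick sketch for illustration; the variable names are arbitrary):

```python
# Preview of the floating-point presentation types using Python's built-in format().
n = 12345.6789

print(format(n, '.6e'))      # 'e': exponent notation  -> 1.234568e+04
print(format(n, '.6E'))      # 'E': uppercase exponent -> 1.234568E+04
print(format(n, '.2f'))      # 'f': fixed point        -> 12345.68
print(format(n, '.6g'))      # 'g': general format     -> 12345.7
print(format(0.893, '.2%'))  # '%': percentage         -> 89.30%
```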

The following are example formats and the resultant output:

  • N1 = 3.141592654 | Expression set to {N1:.4f} | Result is 3.1416
  • N1 = 12345.6789 | Expression set to {N1:.6e} | Result is 1.234568e+04
  • N1 = 123456789.0123 | Expression set to {N1:.6G} | Result is 1.23457E+08
  • N1 = 123456789.0123 | Expression set to {N1:.6g} | Result is 1.23457e+08
  • N1 = 123456.0123 | Expression set to {N1:.6G} | Result is 123456
  • N1 = 123456.0123 | Expression set to {N1:.9G} | Result is 123456.012
  • N1 = 123456.0123 | Expression set to {N1:.5f} | Result is 123456.01230
  • N1 = 0.893 | Expression set to {N1:.2%} | Result is 89.30%
  • N1 = 3.141592654 | Expression set to {N1} | Result is 3.141592654

f-string examples:

  • N1 = 3.06 | S1 = “TOTHC Above 3.060 mg/kg”

    • Expression set to {S1.split()[0]} above {N1*1000:,.0f} ug/kg
    • Result is TOTHC above 3,060 ug/kg
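Because the Expression is an ordinary Python f-string, the example above can be checked directly in Python:

```python
# Reproduces the f-string example above in plain Python.
N1 = 3.06
S1 = "TOTHC Above 3.060 mg/kg"

# .split()[0] takes the first word; :,.0f adds a thousands separator with no decimals.
result = f"{S1.split()[0]} above {N1*1000:,.0f} ug/kg"
print(result)  # TOTHC above 3,060 ug/kg
```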

REFERENCE: https://www.python.org/dev/peps/pep-3101/

F-STRING REFERENCE: https://www.python.org/dev/peps/pep-0498/

| Syntax | Description | Example | Notes |
|--------|-------------|---------|-------|
| %a | Weekday as locale’s abbreviated name. | Sun, Mon, …, Sat (en_US); So, Mo, …, Sa (de_DE) | (1) |
| %A | Weekday as locale’s full name. | Sunday, Monday, …, Saturday (en_US); Sonntag, Montag, …, Samstag (de_DE) | (1) |
| %w | Weekday as a decimal number, where 0 is Sunday and 6 is Saturday. | 0, 1, …, 6 | |
| %d | Day of the month as a zero-padded decimal number. | 01, 02, …, 31 | |
| %b | Month as locale’s abbreviated name. | Jan, Feb, …, Dec (en_US); Jan, Feb, …, Dez (de_DE) | (1) |
| %B | Month as locale’s full name. | January, February, …, December (en_US); Januar, Februar, …, Dezember (de_DE) | (1) |
| %m | Month as a zero-padded decimal number. | 01, 02, …, 12 | |
| %y | Year without century as a zero-padded decimal number. | 00, 01, …, 99 | |
| %Y | Year with century as a decimal number. | 0001, 0002, …, 2013, 2014, …, 9998, 9999 | (2) |
| %H | Hour (24-hour clock) as a zero-padded decimal number. | 00, 01, …, 23 | |
| %I | Hour (12-hour clock) as a zero-padded decimal number. | 01, 02, …, 12 | |
| %p | Locale’s equivalent of either AM or PM. | AM, PM (en_US); am, pm (de_DE) | (1), (3) |
| %M | Minute as a zero-padded decimal number. | 00, 01, …, 59 | |
| %S | Second as a zero-padded decimal number. | 00, 01, …, 59 | (4) |
| %f | Microsecond as a decimal number, zero-padded on the left. | 000000, 000001, …, 999999 | (5) |
| %z | UTC offset in the form +HHMM or -HHMM (empty string if the object is naive). | (empty), +0000, -0400, +1030 | (6) |
| %Z | Time zone name (empty string if the object is naive). | (empty), UTC, EST, CST | |
| %j | Day of the year as a zero-padded decimal number. | 001, 002, …, 366 | |
| %U | Week number of the year (Sunday as the first day of the week) as a zero-padded decimal number. All days in a new year preceding the first Sunday are considered to be in week 0. | 00, 01, …, 53 | (7) |
| %W | Week number of the year (Monday as the first day of the week) as a decimal number. All days in a new year preceding the first Monday are considered to be in week 0. | 00, 01, …, 53 | (7) |
| %c | Locale’s appropriate date and time representation. | Tue Aug 16 21:30:00 1988 (en_US); Di 16 Aug 21:30:00 1988 (de_DE) | (1) |
| %x | Locale’s appropriate date representation. | 08/16/88 (None); 08/16/1988 (en_US); 16.08.1988 (de_DE) | (1) |
| %X | Locale’s appropriate time representation. | 21:30:00 (en_US); 21:30:00 (de_DE) | (1) |
| %% | A literal '%' character. | % | |

Notes:

  1. Because the format depends on the current locale, care should be taken when making assumptions about the output value. Field orderings will vary (for example, “month/day/year” versus “day/month/year”), and the output may contain Unicode characters encoded using the locale’s default encoding (for example, if the current locale is ja_JP, the default encoding could be any one of eucJP, SJIS, or utf-8; use locale.getlocale() to determine the current locale’s encoding).

  2. The strptime() method can parse years in the full [1, 9999] range, but years < 1000 must be zero-filled to 4-digit width.

    Changed in version 3.2: In previous versions, strptime() was restricted to years >= 1900.

    Changed in version 3.3: In version 3.2, strptime() was restricted to years >= 1000.

  3. When used with the strptime() method, the %p directive only affects the output hour field if the %I directive is used to parse the hour.

  4. Unlike the time module, the datetime module does not support leap seconds.

  5. When used with the strptime() method, the %f directive accepts from one to six digits and zero pads on the right. %f is an extension to the set of format characters in the C standard (but implemented separately in datetime objects, and therefore always available).

  6. For a naive object, the %z and %Z format codes are replaced by empty strings.

    For an aware object:

    • %z is transformed into a 5-character string of the form +HHMM or -HHMM, where HH is a 2-digit string giving the number of UTC offset hours, and MM is a 2-digit string giving the number of UTC offset minutes. For example, if utcoffset() returns timedelta(hours=-3, minutes=-30), %z is replaced with the string -0330.
    • If tzname() returns None, %Z is replaced by an empty string. Otherwise %Z is replaced by the returned value, which must be a string.

    Changed in version 3.2: When the %z directive is provided to the strptime() method, an aware datetime object will be produced. The tzinfo of the result will be set to a timezone instance.

  7. When used with the strptime() method, %U and %W are only used in calculations when the day of the week and the year are specified.

Copyright © 2001-2014 Python Software Foundation; All Rights Reserved
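A short Python sketch of a few of the directives above, using the same sample date as the table’s locale examples:

```python
from datetime import datetime

# Sample date matching the table's examples: Tue Aug 16 21:30:00 1988
d = datetime(1988, 8, 16, 21, 30, 0)

print(d.strftime('%d/%m/%Y'))  # 16/08/1988 (zero-padded day, month, 4-digit year)
print(d.strftime('%H:%M:%S'))  # 21:30:00   (24-hour clock)
print(d.strftime('%j'))        # 229        (day of the year)
print(d.strftime('%I %p'))     # 12-hour clock plus AM/PM (%p is locale-dependent)
```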

  • external faces

    external_faces The external_faces module extracts external faces from a 2D or 3D field for rendering. external_faces produces a mesh of only the external faces of each cell set of a data set. Because each cell set’s external faces are created there may be faces that are seemingly internal (vs. external). This is especially true when external faces is used subsequent to a plume module on 3D (volumetric) input.

  • external edges

    external_edges The external_edges module produces a wireframe representation of an unstructured cell data mesh. This is generally used to visualize the skeletal shape of the data domain while viewing output from other modules, such as plumes and surfaces, inside the unstructured mesh. external_edges produces a mesh of only the external edges which meet the edge angle criteria below for each cell set of a data set. Because each cell set’s external faces are used there may be edges that are seemingly internal (vs. external). This is especially true when external edges is used subsequent to a plume module on 3D (volumetric) input.

  • cross section

    cross section cross section creates a fence diagram along a user defined (x, y) path. The fence cross-section has no thickness (because it is composed of areal elements such as triangles and quadrilaterals), but can be created in either true 3D model space or projected to 2D space. It receives a 3D field (with volumetric elements) into its left input port and it receives lines or polylines (from draw_lines, polyline processing, import_cad, isolines, import vector gis, or other sources) into its right input port. Its function is similar to buffer distance, however it actually creates a new grid and does not rely on any other modules (e.g. plume or plume_shell) to do the “cutting”. Only the x and y coordinates of the input (poly)lines are used because cross section cuts a projected slice that is z invariant. cross section recalculates when either input field is changed (and Run Automatically is on) or when the “Run Once” button is pressed.

  • slice

    slice The slice module allows you to create a subset of your input which is of reduced dimensionality. This means that volumetric, surface and line inputs will result in surface, line and point outputs respectively. This is unlike cut which preserves dimensionality. The slice module is used to slice through an input field using a slicing plane defined by one of four methods.

  • isolines

    isolines The isolines module is used to produce lines of constant (iso)value on a 2D surface (such as a slice plane), or the external faces of a 3D surface, such as the external faces of a plume. The input data for isolines must be a surface (faces), it cannot be a volumetric data field. If the input is the faces of a 3D surface, then the isolines will actually be 3D in nature. Isolines can automatically place labels in the 2D or 3D isolines. By default isolines are on the surface (within it) and they have an elevated jitter level (1.0) to make them preferentially visible. However they can be offset to either side of the surface.

  • cut

    cut The cut module allows you to create a subset of your input which is of the same dimensionality. This means that volumetric, surface, line and point inputs will have subsetted outputs of the same object type. This is unlike slice which decreases dimensionality. The cut module is used to cut away part of the input field using a cutting plane defined by one of four methods.

  • plume

    plume The plume module creates a (same dimensionality) subset of the input, regardless of dimensionality. What this means, in other words, is that plume can receive a field (blue port) model with cells which are points, lines, surfaces and/or volumes and its output will be a subset of the same type of cells. This module should not normally be used when you desire a visualization of a 3D volumetric plume but rather when you wish to do subsequent operations such as analysis, slices, etc.

  • intersection

    intersection intersection is a powerful module that incorporates some of the characteristics of plume, yet allows for any number of volumetric sequential (serial) subsetting operations. The functionality of the intersection module can be obtained by creating a network of serial plume modules. The number of analytes in the intersection is equal to the number of plume modules required.

  • union

    union union is a powerful module that automatically performs for a large number of complex serial and parallel subsetting operations required to compute and visualize the union of multiple analytes and threshold levels. The functionality of the union module can be obtained by creating a network fragment composed of only plume modules. However as the number of analytes in the union increases, the number of plume modules increases very dramatically. The table below lists the number of plume modules required for several cases:

  • subset by expression

    subset by expression The subset by expression module creates a subset of the input grid with the same dimensionality. What this means, in other words, is that subset by expression can receive a field (blue port) model with cells which are points, lines, surfaces and/or volumes and its output will be a subset of the same type of cells.

  • footprint

    footprint The footprint module is used to create the 2D footprint of a plume_shell. It creates a surface at the specified Z Position with an x-y extent that matches the 3D input. The footprint output does not contain data, but data can be mapped onto it with external kriging. NOTE: Do not use adaptive gridding when creating the 3D grid to be footprinted and mapping the maximum values with krig_2d (as in the example shown below). Footprint will produce the correct area, but krig_2d will map anomalous results when used with 3d estimation’s adaptive gridding.

  • slope aspect splitter

    slope_aspect_splitter The slope_aspect_splitter module will split an input field into two output fields based upon the slope and/or aspect of the external face of the cell and the subset expression used. The input field is split into two fields one for which all cells orientations are true for the subset expression, and another field for which all cells orientations are false for the subset expression.

  • crop and downsize

    crop_and_downsize The crop_and_downsize module is used to subset an image, or structured 1D, 2D or 3D mesh (an EVS “field” data type with implicit connectivity). Similar to cropping and resizing a photograph, crop_and_downsize sets ranges of cells in the I, J and K directions which create a subset of the data. When used on an image (which only has two dimensions), crop removes pixels along any of the four edges of the image. Additionally, crop_and_downsize reduces the resolution of the image or grid by an integer downsize value. If the resolution divided by this factor yields a remainder, these cells are dropped.

  • select cell sets

    select cell sets select cell sets provides the ability to select individual stratigraphic layers, lithologic materials or other cell sets for output. If connected to explode_and_scale multiple select cell sets modules will allow selection of specific cell sets for downstream processing. One example would be to texture map the top layer with an aerial photo after one select cell sets and to color the other layers by data with a parallel select cell sets path. This can be accomplished by multiple explode_and_scale modules, but that would be much less efficient.

  • orthoslice

    orthoslice The orthoslice module is similar to the slice module, except limited to only displaying slice positions north-south (vertical), east-west (vertical) and horizontal. orthoslice subsets a structured field by extracting one slice plane and can only be orthogonal to the X, Y, or Z axis. Although less flexible in terms of capability, orthoslice is computationally more efficient.

  • edges

    edges The edges module is similar to the External_Edges module in that it produces a wireframe representation of the nodal data making up an unstructured cell data mesh. There is however, no adjustment of edge angle and therefore only allows viewing of all grid boundaries (internal AND external) of the input mesh. The edges module is useful in that it is able to render lines around adaptive gridding locations whereas external_edges does NOT render lines around this portion of the grid.

  • bounds

    bounds bounds generates lines and/or surfaces that indicate the bounding box of a 3D structured field. This is useful when you need to see the shape of an object and the structure of its mesh. This module is similar to external_edges (set to edge angle = 60), except that bounds allows for placing faces on the bounds of a model.

Subsections of Subsetting

external_faces

The external_faces module extracts external faces from a 2D or 3D field for rendering. external_faces produces a mesh of only the external faces of each cell set of a data set. Because external faces are created for each cell set, there may be faces that are seemingly internal (vs. external). This is especially true when external_faces is used after a plume module on 3D (volumetric) input.

Module Input Ports

  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Field [Field] Outputs the subsetted field as faces.
  • Output Object [Renderable]: Outputs to the viewer.

external_edges

The external_edges module produces a wireframe representation of an unstructured cell data mesh. This is generally used to visualize the skeletal shape of the data domain while viewing output from other modules, such as plumes and surfaces, inside the unstructured mesh. external_edges produces a mesh of only the external edges which meet the edge angle criteria below for each cell set of a data set. Because each cell set’s external faces are used, there may be edges that are seemingly internal (vs. external). This is especially true when external_edges is used after a plume module on 3D (volumetric) input.

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration).
  • Input Field [Field] Accepts a data field from 3d estimation or other similar modules.

Module Output Ports

  • Z Scale [Number] Outputs Z Scale (vertical exaggeration) to other modules
  • Output Field [Field] Outputs the subsetted field as edges
  • Output Object [Renderable]: Outputs to the viewer

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Properties: controls the Z scaling and edge angle used to determine what edges should be displayed
  • Data Selection: controls the type and specific data to be output or displayed

cross section

cross section creates a fence diagram along a user defined (x, y) path. The fence cross-section has no thickness (because it is composed of areal elements such as triangles and quadrilaterals), but can be created in either true 3D model space or projected to 2D space.

It receives a 3D field (with volumetric elements) into its left input port and lines or polylines (from draw_lines, polyline processing, import_cad, isolines, import vector gis, or other sources) into its right input port. Its function is similar to buffer distance; however, it actually creates a new grid and does not rely on any other modules (e.g. plume or plume_shell) to do the “cutting”. Only the x and y coordinates of the input (poly)lines are used because cross section cuts a projected slice that is z invariant. cross section recalculates when either input field is changed (and Run Automatically is on) or when the “Run Once” button is pressed.

If you select the option to “Straighten to 2D”, cross section creates a straightened fence that is projected to a new 2D coordinate system of your choice. The choices are XZ or XY. For output to ESRI’s ArcMAP, XY is required.

NOTE: The beginning of straightened (2D) fences is defined by the order of the points in the incoming line/polyline. This is done to provide the user with complete control over how the cross-section is created. However, if you are provided a CAD file and you do not know the order of the line points, you can export the CAD file using the write_lines module which provides a simple text file that will make it easy to see the order of the points.
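The straightening described above can be sketched in a few lines of Python: the new x coordinate of each fence point is the cumulative horizontal (x, y) distance along the polyline, with z preserved (for XZ output). The function name and data layout here are illustrative, not EVS's API.

```python
import math

def straighten_fence(path_xy, z_values):
    """Project fence points to straightened XZ coordinates: the new x is
    the cumulative horizontal distance along the (x, y) path (so point
    order defines where distance 0 starts), and z is kept as-is.
    Illustrative sketch only -- EVS performs this internally."""
    out = []
    dist = 0.0
    prev = None
    for (x, y), z in zip(path_xy, z_values):
        if prev is not None:
            dist += math.hypot(x - prev[0], y - prev[1])
        out.append((dist, z))
        prev = (x, y)
    return out
```

Note that only the point order matters for where distance 0 starts, which is why the order of points in the incoming line controls the layout of the straightened fence.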

Module Input Ports

  • Input Field [Field] Accepts a volumetric data field.
  • Input Line [Field] Accepts a field with one or more line segments for the creation of the fence cross-section. Only the XY coordinates are used. Data is not used.

Module Output Ports

  • Output Field [Field] Outputs the field
  • Output Object [Renderable]: Outputs to the viewer.

slice

The slice module allows you to create a subset of your input which is of reduced dimensionality. This means that volumetric, surface and line inputs will result in surface, line and point outputs respectively. This is unlike cut which preserves dimensionality.

The slice module is used to slice through an input field using a slicing plane defined by one of four methods

    1. A vertical plane defined by an X or Easting coordinate

    2. A vertical plane defined by a Y or Northing coordinate

    3. A Horizontal plane defined by a Z coordinate

    4. An arbitrarily positioned Rotatable plane which requires:

      1. A 3D point through which the slicing plane passes. This point can be displayed using the Reference Sphere, whose size, visibility and transparency can be controlled. Please note that the same slicing result can be achieved with an infinite number of 3D points, all of which would be on the same slicing plane.
      2. A Dip direction
      3. A Strike direction
Info
  • The slice module may be controlled with the driven sequence module.
  • Only the orthogonal slice methods (Easting, Northing and Horizontal) may be used with driven sequence.
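The geometry of the Rotatable plane can be sketched as follows. Assuming x = east, y = north, z = up and the right-hand rule (dip direction = strike + 90 degrees) — conventions this page does not spell out, so treat them as assumptions — the plane's upward normal follows from strike and dip, and the sign of the point-to-plane distance tells you which side of the slicing plane a node is on:

```python
import math

def plane_normal(strike_deg, dip_deg):
    # Upward-pointing unit normal, assuming x=east, y=north, z=up and the
    # right-hand rule (dip direction = strike + 90 degrees).
    dip_dir = math.radians(strike_deg + 90.0)
    dip = math.radians(dip_deg)
    return (-math.sin(dip_dir) * math.sin(dip),
            -math.cos(dip_dir) * math.sin(dip),
            math.cos(dip))

def signed_distance(point, origin, normal):
    # > 0 means the point is on the normal side ("above") of the plane.
    # Any origin on the plane gives the same result, which is why an
    # infinite number of 3D points define the same slice.
    return sum(n * (p - o) for p, o, n in zip(point, origin, normal))
```

With dip = 0 the normal is vertical (a horizontal slicing plane), matching the simpler Horizontal method.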

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration).
  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Z Scale [Number] Outputs Z Scale (vertical exaggeration) to other modules
  • Output Field [Field] Outputs the field
  • Output Object [Renderable]: Outputs to the viewer.

isolines

The isolines module is used to produce lines of constant (iso)value on a 2D surface (such as a slice plane), or the external faces of a 3D surface, such as the external faces of a plume. The input data for isolines must be a surface (faces), it cannot be a volumetric data field. If the input is the faces of a 3D surface, then the isolines will actually be 3D in nature. Isolines can automatically place labels in the 2D or 3D isolines. By default isolines are on the surface (within it) and they have an elevated jitter level (1.0) to make them preferentially visible. However they can be offset to either side of the surface.

Module Input Ports

  • Input Field [Field] Accepts a data field.
  • Input Contour Levels [Contours]: Accepts an array of values representing values to place isolines

Module Output Ports

  • Output Field [Field] Outputs the field with altered data min/max values
  • Output Contour Levels [Contours]: Outputs an array of values representing values to be labeled in the legend.
  • Output Object [Renderable]: Outputs to the viewer.

cut

The cut module allows you to create a subset of your input which is of the same dimensionality. This means that volumetric, surface, line and point inputs will have subsetted outputs of the same object type. This is unlike slice which decreases dimensionality.

The cut module cuts away part of the input field using a cutting plane defined by one of four methods

    1. A vertical plane defined by an X or Easting coordinate

    2. A vertical plane defined by a Y or Northing coordinate

    3. A Horizontal plane defined by a Z coordinate

    4. An arbitrarily positioned Rotatable plane which requires:

      1. A 3D point through which the slicing plane passes. This point can be displayed using the Reference Sphere, whose size, visibility and transparency can be controlled. Please note that the same slicing result can be achieved with an infinite number of 3D points, all of which would be on the same slicing plane.
      2. A Dip direction
      3. A Strike direction
Info
  • The cut module may be controlled with the driven sequence module.
  • Only the orthogonal cut methods (Easting, Northing and Horizontal) may be used with driven sequence.

The cutting plane essentially cuts the data field into two parts and sends only the part above or below the plane to the output ports (above and below are terms which are defined by the normal vector of the cutting plane). The output of cut is the subset of the model from the side of the cut plane specified.

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration).
  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Z Scale [Number] Outputs Z Scale (vertical exaggeration) to other modules
  • Cut Field [Field] Outputs the field with “cut” data to later use for subsetting
  • Output Field [Field] Outputs the subsetted field
  • Output Object [Renderable]: Outputs to the viewer.

plume

The plume module creates a (same dimensionality) subset of the input, regardless of dimensionality. In other words, plume can receive a field (blue port) model whose cells are points, lines, surfaces and/or volumes, and its output will be a subset of the same type of cells.

This module should not normally be used when you desire a visualization of a 3D volumetric plume but rather when you wish to do subsequent operations such as analysis, slices, etc.


Module Input Ports

  • Input Field [Field] Accepts a data field.
  • Isolevel [Number] Accepts the subsetting level.

Module Output Ports

  • Output Field [Field] Outputs the subsetted field as a volume.
  • Status [String / minor] Outputs a string containing a description of the operation being performed (e.g. TCE plume above 4.00 mg/kg)
  • Isolevel [Number] Outputs the subsetting level.
  • Plume [Renderable]: Outputs to the viewer.

intersection

intersection is a powerful module that incorporates some of the characteristics of plume, yet allows for any number of volumetric sequential (serial) subsetting operations.

The functionality of the intersection module can be obtained by creating a network of serial plume modules. The number of analytes in the intersection is equal to the number of plume modules required.

The intersection of multiple analytes and threshold levels can be equated to the answer to the following question (example assumes three analytes A, B & C with respective subsetting levels of a, b and c):

“What is the volume within my model where A is above a, AND B is above b, AND C is above c?”

image\\boolean.jpg

The figure above is a Boolean representation of 3 analyte plumes (A, B & C). The intersection of all three is the black center portion of the figure. Think of the image boundaries as the complete extents of your models (grid). The “A” plume is the circle colored cyan and includes the green, black and blue areas. The intersection of just A & C would be both the green and black portions.

Module Input Ports

  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Field [Field] Outputs the subsetted field
  • Output Object [Renderable]: Outputs to the viewer.

union

union is a powerful module that automatically performs the large number of complex serial and parallel subsetting operations required to compute and visualize the union of multiple analytes and threshold levels. The functionality of the union module can be obtained by creating a network fragment composed of only plume modules. However, as the number of analytes in the union increases, the number of plume modules increases very dramatically. The table below lists the number of plume modules required for several cases:

Number of Analytes    Number of plume Modules
2                     3
3                     6
4                     10
5                     15
6                     21
7                     28
n                     (n * (n+1)) / 2

From the above table, it should be evident that as the number of analytes in the union increases, the computation time will increase dramatically. Even though union appears to be a single module, internally it grows more complex as the number of analytes increases.
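The table's formula can be checked with a one-liner (the function name is ours, purely for illustration):

```python
def plume_modules_for_union(n_analytes):
    # Number of plume modules an equivalent hand-built network needs,
    # per the table above: (n * (n + 1)) / 2.
    return n_analytes * (n_analytes + 1) // 2

counts = {n: plume_modules_for_union(n) for n in range(2, 8)}
# reproduces the table: {2: 3, 3: 6, 4: 10, 5: 15, 6: 21, 7: 28}
```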

The union of multiple analytes and threshold levels can be equated to the answer to the following question (example assumes three analytes A, B & C with respective subsetting levels of a, b and c):

“What is the volume within my model where A is above a, OR B is above b, OR C is above c?”

image\\boolean.jpg

The figure above is a Boolean representation of 3 analyte plumes (A, B & C). The union of all three is the entire colored portion of the figure. Think of the image boundaries as the complete extents of your models (grid). The “A” plume is the circle colored cyan and includes the green, black and blue areas. The union of just A & C would be all colored regions EXCEPT the magenta portion of B.

Module Input Ports

  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Field [Field] Outputs the subsetted field
  • Output Object [Renderable]: Outputs to the viewer.

subset by expression

The subset by expression module creates a subset of the input grid with the same dimensionality. In other words, subset by expression can receive a field (blue port) model whose cells are points, lines, surfaces and/or volumes, and its output will be a subset of the same type of cells.

subset by expression is different from plume in that it outputs entire cells making its output lego-like.

It uses a mathematical expression allowing you to do complex subsetting calculations on coordinates and MULTIPLE data components with a single module, which can dramatically simplify your network and reduce memory usage. It has 2 floating point variables (N1, N2) which are set up with ports so they can be easily animated.

Subset By: You can specify whether the subsetting is based on either Nodal data or Cell data.

Expression: a Python expression evaluated on the selected data to determine which cells to include.

Cells to Include: specifies whether all of a cell’s nodes must match the criteria for the cell to be included, or whether ANY matching node suffices. The second option includes more cells.

Operators:

  • == Equal to
  • < Less than
  • > Greater than
  • <= Less than or equal to
  • >= Greater than or equal to
  • or
  • and
  • in (as in list)

Example Expressions:

  • If Nodal data is selected:
    • D0 >= N1 All nodes with the first analyte greater than or equal to N1 will be used for inclusion determination.
    • (D0 < N1) or (D1 < N2) All nodes with the first analyte less than N1 OR the second analyte less than N2 will be used for inclusion determination.
  • If Cell data is selected:
    • D1 in [0, 2] where D1 is Layer will give you the uppermost and third layers.
    • D1 in [1] where D1 is Layer will give you the middle layer.
    • D1 == 0 where D1 is Layer will give you the uppermost layer
    • D1 >= 1 where D1 is Layer will give you all but the uppermost layer
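The interaction between the expression and the all-vs-any node option can be sketched in plain Python. The names D0 and N1 mirror the expression variables above; the nodal values are invented for illustration:

```python
# Hypothetical nodal values of the first analyte (D0) at the 8 nodes of
# one hexahedral cell, and a threshold N1. The numbers are made up.
D0 = [3.1, 4.2, 5.0, 3.9, 4.8, 5.5, 2.7, 4.1]
N1 = 4.0

node_matches = [d >= N1 for d in D0]       # per-node result of "D0 >= N1"

include_if_all_match = all(node_matches)   # stricter option: False here
include_if_any_match = any(node_matches)   # looser option: True here
```

Because some nodes fall below the threshold, this cell is excluded under the "all nodes" option but included under the "any node" option — which is why the second option always includes at least as many cells.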

Module Input Ports

  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Field [Field] Outputs the subsetted field as a volume.
  • Status [String / minor] Outputs a string containing a description of the operation being performed (e.g. TCE plume above 4.00 mg/kg)
  • Isolevel [Number] Outputs the subsetting level.
  • Plume [Renderable]: Outputs to the viewer.

footprint

The footprint module is used to create the 2D footprint of a plume_shell. It creates a surface at the specified Z Position with an x-y extent that matches the 3D input. The footprint output does not contain data, but data can be mapped onto it with external kriging.

NOTE: Do not use adaptive gridding when creating the 3D grid to be footprinted and mapping the maximum values with krig_2d (as in the example shown below). Footprint will produce the correct area, but krig_2d will map anomalous results when used with 3d estimation’s adaptive gridding.

Module Input Ports

  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Field [Field] Outputs the subsetted field.
  • Output Object [Renderable]: Outputs to the viewer.

NOTE: Creating a 2D footprint with the maximum data within the plume volume mapped to each x-y location requires the external data and external gridding options in krig_2d. A typical network and output is shown below.

slope_aspect_splitter

The slope_aspect_splitter module splits an input field into two output fields based upon the slope and/or aspect of the external face of each cell and the subset expression used. The input field is split into two fields: one containing the cells whose orientations are true for the subset expression, and another containing the cells whose orientations are false for it.

All data from the original input is preserved in the output.

Flat Surface Aspect: If you have a flat surface, a realistic aspect cannot be generated. This field lets you set the value for those cells.

  1. To output all upward-facing surfaces: use the default subset expression of SLOPE < 89.9. If your object were a perfect sphere, this would give you most of the upper hemisphere, since the equator would be at a slope of 90 degrees and the bottom would exceed 90 degrees.

(Note: to allow for potential rounding errors, use 89.9 instead of 90.)

Note: If your ground surface is perfectly flat and you wanted only it, you could use SLOPE < 0.01, however in the real world where topography exists, it can be difficult if not impossible to extract the ground surface and not get some other bits of surfaces that also meet your criteria.

  2. General expressions (assuming a standard cubic building):

A) SLOPE > 0.01 (Removes the top of the building)

B) SLOPE > 0.01 and SLOPE < 179.9 (Removes the top and bottom of the building)

  3. Since ASPECT is a variable, it must be defined for each cell. Cells with a slope of 0 or 180 would have no aspect without it being defined via the Flat Surface Aspect field.

  4. Units are always degrees. You can convert to radians inside the expression if you wish (SLOPE * PI/180).
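A sketch of how slope and aspect can be derived from a face's outward normal, under the conventions described above (slope = angle from vertical in degrees; aspect = compass azimuth with 0 = north). The function and argument names are ours, not the module's:

```python
import math

def slope_aspect(nx, ny, nz, flat_surface_aspect=0.0):
    # Slope: angle between the outward normal and vertical, in degrees.
    # 0 = facing straight up, 90 = vertical wall, 180 = facing down.
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    slope = math.degrees(math.acos(nz / length))
    # Aspect: compass azimuth of the normal's horizontal projection
    # (0 = north); undefined for flat faces, so the user-supplied
    # Flat Surface Aspect value is used instead.
    if nx == 0.0 and ny == 0.0:
        aspect = flat_surface_aspect
    else:
        aspect = math.degrees(math.atan2(nx, ny)) % 360.0
    return slope, aspect
```

On a perfect sphere, the top has slope 0, the equator slope 90, and the bottom slope 180, which is why SLOPE < 89.9 keeps (most of) the upward-facing hemisphere.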

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration).
  • Input Field [Field] Accepts a data field.
  • Number Variable 1 [Number] Accepts the first numeric value for the slope or aspect expression
  • Number Variable 2 [Number] Accepts the second numeric value for the slope or aspect expression

Module Output Ports

  • Z Scale [Number] Outputs Z Scale (vertical exaggeration) to other modules
  • Output True Field [Field] Outputs the field which matches the subsetting expression
  • Output False Field [Field] Outputs the opposite of the true field

crop_and_downsize

The crop_and_downsize module is used to subset an image, or structured 1D, 2D or 3D mesh (an EVS “field” data type with implicit connectivity). Similar to cropping and resizing a photograph, crop_and_downsize sets ranges of cells in the I, J and K directions which create a subset of the data. When used on an image (which only has two dimensions), crop removes pixels along any of the four edges of the image. Additionally, crop_and_downsize reduces the resolution of the image or grid by an integer downsize value. If the resolution divided by this factor yields a remainder, these cells are dropped.

crop_and_downsize refers to I, J, and K dimensions instead of x-y-z. This is done because grids are not required to be parallel to the coordinate axes, nor must the grid rows, columns and layers correspond to x, y, or z. You may have to experiment with this module to determine which coordinate axes or model faces are being cropped or downsized.
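The crop-then-downsize behavior maps naturally onto array slicing; here is a minimal 2D sketch using NumPy purely for illustration (the real module also handles 1D/3D meshes and images, and the function signature is ours):

```python
import numpy as np

def crop_and_downsize(grid, i_range, j_range, downsize):
    # Keep cells whose I/J indices fall in the inclusive ranges, then
    # retain every `downsize`-th one; trailing cells that do not divide
    # evenly are simply dropped, as described above.
    i0, i1 = i_range
    j0, j1 = j_range
    return grid[i0:i1 + 1:downsize, j0:j1 + 1:downsize]

image = np.arange(100).reshape(10, 10)     # a 10 x 10 stand-in grid
subset = crop_and_downsize(image, (2, 8), (0, 9), 2)
# subset spans rows 2, 4, 6, 8 and columns 0, 2, 4, 6, 8 -> shape (4, 5)
```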

Module Input Ports

  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Field [Field] Outputs the subsetted field
  • Output Object [Renderable]: Outputs to the viewer.

select cell sets

select cell sets provides the ability to select individual stratigraphic layers, lithologic materials or other cell sets for output. If connected to explode_and_scale, multiple select cell sets modules allow selection of specific cell sets for downstream processing. One example would be to texture map the top layer with an aerial photo after one select cell sets and to color the other layers by data with a parallel select cell sets path. This could also be accomplished with multiple explode_and_scale modules, but that would be much less efficient.

Module Input Ports

  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Field [Field] Outputs the subsetted field
  • Output Object [Renderable]: Outputs to the viewer.

orthoslice

The orthoslice module is similar to the slice module, except that it is limited to displaying slice positions north-south (vertical), east-west (vertical) and horizontal. orthoslice subsets a structured field by extracting one slice plane and can only be orthogonal to the X, Y, or Z axis. Although less flexible in terms of capability, orthoslice is computationally more efficient.

The axis selector chooses which axis (I, J, K) the orthoslice is perpendicular to. The default is I. If the field is 1D or 2D, three values are still displayed. Select the values meaningful for the input data.

The plane slider selects which plane to extract from the input. This is similar to the position slider in slice but, since the input is a field, the selection is based on the nodal dimensions of the axis of interest. Therefore, the range is 0 to the maximum nodal dimension of the axis. For example, for an orthoslice through a grid with dimension 20 x 20 x 10, the range in the x and y directions would be 0 to 20.

edges

The edges module is similar to the external_edges module in that it produces a wireframe representation of the nodal data making up an unstructured cell data mesh. There is, however, no adjustment of edge angle, so it displays all grid boundaries (internal AND external) of the input mesh. The edges module is useful in that it is able to render lines around adaptive gridding locations, whereas external_edges does NOT render lines around this portion of the grid.

bounds

bounds generates lines and/or surfaces that indicate the bounding box of a 3D structured field. This is useful when you need to see the shape of an object and the structure of its mesh. This module is similar to external_edges (set to edge angle = 60), except that bounds allows for placing faces on the bounds of a model.

bounds has one input port. Data passed to it must contain any type of structured mesh (a grid definable with IJK resolution and no separable layers). Node_Data can be present, but is only used if you switch on Data.

  • distance to 2d area

    distance to 2d area distance to 2d area receives any 3D field into its left input port and it receives triangulated polygons (from triangulate_polygon, or other sources) into its right input port. Its function is similar to buffer distance or distance to shape. It adds a data component to the input 3D field and using plume_shell, you can cut structures inside or outside of the input polygons. Only the x and y coordinates of the polygons are used because distance to 2d area cuts a projected slice that is z invariant. distance to 2d area recalculates when either input field is changed or the “Accept” button is pressed.

  • distance to surface

    distance to surface distance to surface receives any 3D field into its left input port and a surface (from create_tin, surface from horizons, slice, etc.) into its right input port. Its function is similar to distance to shape. It adds a data component to the input 3D field referencing the cutting surface. With this new data component you can use a subsetting module like plume to pass either side of the 3D field as defined by the cutting surface, thereby allowing cutting of structures along any surface. The surface can originate from a TIN surface, a slice plane or a geologic surface. The cutting surface can be multi-valued in Z, which means the surface can have instances where there is more than one z value for a single x, y coordinate. This might occur with a wavy fault surface that is nearly vertical, or a fault surface with recumbent folds.

  • distance to shape

    distance to shape distance to shape receives any 3D field into its input port and outputs the same field with an additional data component. Using plume_shell, you can cut structures with either a cylinder or rotated rectangle. The cutting action is z invariant (like a cookie cutter). Depending on the resolution of the input field, rectangles may not have sharp corners. With rectilinear fields (and non-rotated rectangles), the threshold module can replace plume_shell to produce sharp corners (by removing whole cells). plume can be used to output 3D fields for additional filtering or mapping.

  • buffer distance

    buffer distance buffer distance receives any 3D field into its left input port and it receives polylines (from read_lines, import vector gis, import_cad, isolines, or other sources) into its right input port. Its function is similar to distance to shape. It adds a data component to the input 3D field and using plume_shell, you can cut structures along the path of the input polylines. Only the x and y coordinates of the polylines are used because buffer distance creates data to cut a projected region that is z invariant. buffer distance recalculates when either input field is changed or the “Execute” button is pressed. “Thick Fences” can be produced with the output of this module.

  • distance to tunnel center

    distance to tunnel center The distance to tunnel center module is similar to the distance to surface module in that it receives any 3D field into its left input port, BUT instead of a surface, it receives a line (along the trajectory of a tunnel, boring or mineshaft) into its right input port. The distance to tunnel center module then cuts a cylinder, of user defined radius, along the line trajectory. The algorithm is identical in concept to distance to surface in that it adds a data component to the input 3D field referencing the distance from the line (trajectory). With this new data component you can use a subsetting module like plume_volume to pass either portion of the 3D field (inside the cylinder or outside the cylinder), thereby allowing cutting tunnels along any trajectory. The trajectory line can originate from any one of a number of sources such as read_lines, import cad or import vector gis.

  • overburden

    overburden The overburden module computes the complete volume required to excavate a plume or ore body given the pit wall slope (measured from vertical) and the excavation digging accuracy (which we refer to as buffer size). overburden receives any 3D field into its input port and outputs the same field with an additional data component. Its function is similar to distance to shape, but instead involves computing a new data component based on the nodal values in the 3D field and two user defined parameter values called Wall Slope and buffer size (addressing excavation accuracy). The data component is subset according to a concentration input (based on the subsetting level you want excavated). For example, once overburden has been run for GOLD at a 45 degree pit wall slope, the user would select 45-deg:overburden_GOLD and subset all data below 1 ppm to render a 45 degree slope pit which would excavate everything higher than 1 ppm concentration. A volumetrics calculation could be made on these criteria which would encompass the excavation and the ore body above 1 ppm.

Subsections of Proximity

distance to 2d area

distance to 2d area receives any 3D field into its left input port and it receives triangulated polygons (from triangulate_polygon, or other sources) into its right input port. Its function is similar to buffer distance or distance to shape. It adds a data component to the input 3D field and using plume_shell, you can cut structures inside or outside of the input polygons. Only the x and y coordinates of the polygons are used because distance to 2d area cuts a projected slice that is z invariant. distance to 2d area recalculates when either input field is changed or the “Accept” button is pressed.

Module Input Ports

  • Input Field [Field] Accepts a data field.
  • Input Area [Field] Accepts a field with the area to include/exclude

Module Output Ports

  • Output Field [Field] Outputs the field with area data to allow subsetting

The first thing to know is that distance to 2d area does not cut.

It provides data with which you can then subset using other modules like plume or intersection.

Without subsetting modules AFTER distance to 2d area, you would see no effect of having distance to 2d area in your application, other than it adds a new nodal data component called distance to 2d area (or whatever you’ve renamed your module to be).

distance to 2d area needs a SURFACE as its input. It does not care where that surface comes from and it certainly does not need to be from a DWG file. The surface can be complex, meaning that it can have holes in it, or it can be separate disjoint pieces of surface(s).

If you’re starting with lines, it is required that the lines form a closed polyline. It is not enough that the lines appear to be a closed path, they must be truly closed, with each successive segment precisely connected to the last and next. CAD files are often poorly drawn and are not closed (though they can be well drawn and properly closed also).

Our draw_lines module can certainly be used to create a Closed polyline, but you must make sure to turn on the “Closed” toggle for each line segment to ensure it is closed.

Once you have one or more closed polylines, you will need to pass those through triangulate_polylines modules to create a TIN surface from the closed polylines. You should confirm (by connecting it to the viewer) that you are getting the correct surface before proceeding to distance to 2d area. If triangulate_polylines will not run, your lines are not closed.

Once you have your surface(s) and you pass that to the right input port of distance to 2d area, the output of distance to 2d area is data with which you can subset your original model. The data is zero (0.0) at the boundaries of your surface, less than zero (negative) inside the surface, and greater than zero (positive) outside of it. To get everything inside, choose “Below Level” in the subsetting modules rather than the default “Above Level”.
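The sign convention just described (negative inside, positive outside, zero on the boundary) can be sketched for a single simple polygon in plain Python; the real module handles holes and disjoint surfaces, which this sketch does not:

```python
import math

def signed_distance_2d(x, y, polygon):
    # Signed distance from (x, y) to a closed polygon given as a list of
    # (x, y) vertices: negative inside, positive outside, ~0 on the
    # boundary. Sketch only -- one simple polygon, no holes.
    inside = False
    dmin = float("inf")
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        # ray-casting inside/outside test
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
        # unsigned distance to this edge segment
        dx, dy = x2 - x1, y2 - y1
        seg2 = dx * dx + dy * dy
        t = 0.0 if seg2 == 0 else max(0.0, min(1.0,
            ((x - x1) * dx + (y - y1) * dy) / seg2))
        dmin = min(dmin, math.hypot(x - (x1 + t * dx), y - (y1 + t * dy)))
    return -dmin if inside else dmin
```

Subsetting this value "Below Level" at 0.0 keeps the inside, matching the guidance above.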

distance to surface

distance to surface receives any 3D field into its left input port and a surface (from create_tin, surface from horizons, slice, etc.) into its right input port. Its function is similar to distance to shape. It adds a data component to the input 3D field referencing the cutting surface. With this new data component you can use a subsetting module like plume to pass either side of the 3D field as defined by the cutting surface, thereby allowing cutting of structures along any surface. The surface can originate from a TIN surface, a slice plane or a geologic surface. The cutting surface can be multi-valued in Z, which means the surface can have instances where there is more than one z value for a single x, y coordinate. This might occur with a wavy fault surface that is nearly vertical, or a fault surface with recumbent folds.

distance to surface recalculates when either input field is changed or the “Accept” button is pressed.

The general approach with distance to surface is:

  • Create a cutting surface representing either a fault plane, a scouring surface (unconformity), or an excavation.
  • Create a 3D model of the object you wish to cut.
  • Pass the 3D model into the left port of distance to surface, and the cutting surface to the right port of distance to surface and hit accept.
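The data component this workflow adds can be sketched as a vertical distance from each node to the surface. For the sketch only, we assume a single-valued surface z = f(x, y); as noted above, the real module also handles multi-valued surfaces:

```python
def distance_to_surface(nodes, surface_z):
    # For each (x, y, z) node, the added data component is the vertical
    # distance to the surface: positive above it, negative below it.
    # `surface_z` is any callable z = f(x, y); a real cutting surface
    # would come from a TIN, slice plane or geologic surface.
    return [z - surface_z(x, y) for (x, y, z) in nodes]

# Subsetting this component "Above Level" at 0.0 keeps the part of the
# 3D field above the surface; "Below Level" keeps the part beneath it.
```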

Module Input Ports

  • Input 3D Field [Field] Accepts a data field.
  • Input Surface [Field] Accepts a field with the surface to cut the input volume/surface

Module Output Ports

  • Output Field [Field] Outputs the field with distance to surface data to allow subsetting

distance to shape

distance to shape receives any 3D field into its input port and outputs the same field with an additional data component. Using plume_shell, you can cut structures with either a cylinder or rotated rectangle. The cutting action is z invariant (like a cookie cutter). Depending on the resolution of the input field, rectangles may not have sharp corners. With rectilinear fields (and non-rotated rectangles), the threshold module can replace plume_shell to produce sharp corners (by removing whole cells). plume can be used to output 3D fields for additional filtering or mapping.

Module Input Ports

  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Field [Field] Outputs the field with data to allow subsetting.

buffer distance

buffer distance receives any 3D field into its left input port and it receives polylines (from read_lines, import vector gis, import_cad, isolines, or other sources) into its right input port. Its function is similar to distance to shape. It adds a data component to the input 3D field, and using plume_shell, you can cut structures along the path of the input polylines. Only the x and y coordinates of the polylines are used because buffer distance creates data to cut a projected region that is z invariant. buffer distance recalculates when either input field is changed or the “Execute” button is pressed. “Thick Fences” can be produced with the output of this module.
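The z-invariant behavior can be illustrated with a small sketch (not EVS code): the distance from a node to the polyline depends only on the x-y coordinates, so nodes at any elevation get the same value.

```python
import math

# Sketch: distance from a point to a polyline using only x-y
# coordinates, consistent with the cookie-cutter behavior above.
def xy_distance_to_polyline(pt, polyline):
    def seg_dist(p, a, b):
        ax, ay = a
        bx, by = b
        px, py = p
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return math.hypot(px - ax, py - ay)
        # Clamp the projection onto the segment to [0, 1]:
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))
    # The z coordinates of both the point and the polyline are ignored:
    return min(seg_dist(pt[:2], polyline[i][:2], polyline[i + 1][:2])
               for i in range(len(polyline) - 1))

d = xy_distance_to_polyline((0.0, 5.0, 99.0), [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)])
```

Here the point's z value of 99.0 has no effect on the result, mirroring the projected (z-invariant) cutting region.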

Module Input Ports

  • Input Field [Field] Accepts a data field.
  • Input Fence Line [Field] Accepts a field with the line(s) to cut the input volume/surface

Module Output Ports

  • Output Field [Field] Outputs the field with distance to path(s) data to allow subsetting

distance to tunnel center

The distance to tunnel center module is similar to the distance to surface module in that it receives any 3D field into its left input port, BUT instead of a surface, it receives a line (along the trajectory of a tunnel, boring or mineshaft) into its right input port. The distance to tunnel center module then cuts a cylinder, of user defined radius, along the line trajectory. The algorithm is identical in concept to distance to surface in that it adds a data component to the input 3D field referencing the distance from the line (trajectory). With this new data component you can use a subsetting module like plume_volume to pass either portion of the 3D field (inside the cylinder or outside the cylinder), thereby allowing cutting tunnels along any trajectory. The trajectory line can originate from any one of a number of sources such as read_lines, import cad or import vector gis.

The general approach is to subset the distance to tunnel center data component with either constant_shell or plume_volume. The choice of 1.0 for the subsetting level will result in cutting AT the user radius, while values less than 1.0 are inside the cylinder wall and values greater than 1.0 are outside the cylinder wall.
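The subsetting-level convention amounts to a distance normalized by the user radius, as this standalone sketch shows (the node distances are hypothetical):

```python
# The data behaves like distance from the tunnel line divided by the
# user-defined radius, so 1.0 falls exactly on the cylinder wall.
radius = 5.0
node_distances = [2.0, 5.0, 7.5]          # distances from the trajectory line
levels = [d / radius for d in node_distances]

inside_wall = [lv for lv in levels if lv < 1.0]   # strictly inside the cylinder
```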

Module Input Ports

  • Input Field [Field] Accepts a data field.
  • Input Tunnel Line [Field] Accepts a field with the line to cut the input volume/surface

Module Output Ports

  • Output Field [Field] Outputs the field with distance to tunnel line data to allow subsetting

overburden

The overburden module computes the complete volume required to excavate a plume or ore body given the pit wall slope (measured from vertical) and the excavation digging accuracy (which we refer to as buffer size).

overburden receives any 3D field into its input port and outputs the same field with an additional data component. Its function is similar to distance to shape, but instead involves computing a new data component based on the nodal values in the 3D field and two user defined parameter values called Wall Slope and buffer size (addressing excavation accuracy). The data component is subset according to a concentration input (based on the subsetting level you want excavated). For example, once overburden has been run for GOLD at a 45 degree pit wall slope, the user would select 45-deg:overburden_GOLD and subset all data below 1 ppm to render a 45 degree slope pit which would excavate everything higher than 1 ppm concentration. A volumetrics calculation could be made on these criteria which would encompass the excavation and the ore body above 1 ppm.

NOTES:

  • It is much safer and more understandable to work at Z Scale = 1. Otherwise, the apparent angle of your pit will be very different from the input angle.

    • As the Z Scale increases, the angle of pit sidewalls looks more vertical, since the tangent of the apparent angle is the tangent of the actual angle multiplied by the Z Scale.
  • The overburden module must be placed before any scaling modules (such as explode_and_scale) to ensure an accurate slope angle during computations and subsequent visualizations.

  • The grid resolution and resulting cell aspect ratios are very important.

    • You cannot see any pit wall slope differences if those differences create a slope which is less than one cell wide from the bottom of the pit to the top.
    • Therefore, very high resolutions in X-Y are needed for large sites with shallow pits. Expect long run times for overburden.
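The tangent relationship above can be checked numerically (a sketch; `apparent_angle` is a hypothetical helper, with angles measured from horizontal):

```python
import math

# tan(apparent) = z_scale * tan(actual), with angles from horizontal:
def apparent_angle(actual_deg, z_scale):
    return math.degrees(math.atan(z_scale * math.tan(math.radians(actual_deg))))

steeper = apparent_angle(45.0, 3.0)   # a 45-degree wall viewed at Z Scale = 3
```

At Z Scale = 3, the 45-degree wall appears at roughly 71.6 degrees from horizontal, i.e. much closer to vertical, which is why overburden should run at Z Scale = 1.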

Note on angles: Angles are defined from the vertical and are specified in degrees.

  • A vertical wall pit is created with an angle of Zero (0.0) degrees
  • A 2:1 pitch slope from horizontal would be an angle whose arctangent = 2.0. This is 63.4 degree from horizontal and therefore you would enter 26.6 degrees (from vertical)
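The 2:1 example above can be reproduced with a small conversion helper (a sketch; `pitch_to_vertical_angle` is not an EVS function):

```python
import math

# Convert a slope pitch (rise:run, measured from horizontal) to the
# from-vertical angle that overburden expects as input:
def pitch_to_vertical_angle(rise_over_run):
    from_horizontal = math.degrees(math.atan(rise_over_run))
    return 90.0 - from_horizontal

angle = pitch_to_vertical_angle(2.0)   # a 2:1 pitch -> about 26.6 degrees
```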

Module Input Ports

  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Field [Field] Outputs the enhanced field with overburden data

Create Buffer Around Plume - This toggle determines if the overburden computations are rigorous and determine the buffer on all sides of the plume (ore body). If this is off, the module runs much more quickly.

Buffer Size - An accuracy level resulting in the amount of excavation outside the subsetting level of interest. For example, a type-in of 10.0 would result in 10 feet of over-excavation from the subsetting level of interest.

Overburden creates a data component name that includes the wall slope, module name (including #1 or #2 if there is more than one copy in your application), and original data component (analyte) name (e.g., 30-deg:overburden#1 of Benzene).

The overburden data component may be subset by modules such as plume, isosurface, plume_shell, etc.

  • node computation

    node_computation The node_computation module is used to perform mathematical operations on nodal data fields and coordinates. Data values can be used to affect coordinates (x, y, or z) and coordinates can be used to affect data values. Up to two fields can be input to node_computation. Mathematical expressions can involve one or both of the input fields. Fields must be identical grids: they must have the same number of nodes and cells, otherwise the results will not make sense.

  • cell computation

    cell_computation The cell_computation module is used to perform mathematical operations on cell data in fields. Unlike node_computation, it cannot affect coordinates. Though data values can’t be used to affect coordinates (x, y, or z), the cell center (average of nodes) coordinates can be used to affect data values. Up to two fields can be input to cell_computation. Mathematical expressions can involve one or both of the input fields.

  • combine nodal_data

    combine_nodal_data The combine_nodal_data module is used to create a new set of nodal data components by selecting components from up to six separate input data fields. The mesh (x-y-z coordinates) from the first input field will be the mesh in the output. The input fields should have the same scale, origin, and number of nodes in order for the output data to have any meaning. This module is useful for combining data contained in multiple field ports or files, or from different Kriging modules.

  • interpolate data

    interpolate data The interpolate data module interpolates nodal and/or cell data from a 3D or 2D field to either a 2D mesh or 1D line. Typical uses of this module are mapping of data from a 3D mesh onto a geologic surface or a 2D fence section. In these applications the 2D surface(s) simply provide the new geometry (mesh) onto which the adjacent nodal values are interpolated. The primary requirement is that the data be equal or higher dimensionality than the mesh to be interpolated onto. For instance, if the user has a 2D surface with nodal data (perhaps z values), then a 1D line may be input and the nearest nodal values from the 2D surface will be interpolated onto it.

  • compute thickness

    The compute thickness module allows you to compute the thickness of complex plumes or cell sets such as lithologic modeling's materials.

  • translate by data

    translate by data The translate by data module accepts nearly any mesh and translates the grid in x, y, or z based upon either a nodal or cell data component or a constant. The interface enables changing the Scale Factor for z translates to accommodate an overall z exaggeration in your applications. This module is most useful when used with the import vector gis module to properly place polygonal shapefile cells at the proper elevation.

  • cell data to node data

    cell data to node data The cell data to node data module is used to translate cell data components to nodal data components. Cell data components are data components which are associated with cells rather than nodes. Most modules in EVS that deal with analytical or continuum data support node based data. Therefore, cell data to node data can be used to translate cell based data to a nodal data structure consistent with other EVS modules.

  • node data to cell data

    The node data to cell data module is used to translate nodal data components to cell data components. Cell data components are data components which

  • shrink cells

    shrink cells The shrink cells module produces a mesh containing disjoint cells which can be optionally shrunk relative to their geometric centers. It creates duplicate nodes for all cells that share the same node, making them disjoint. If the shrink cells toggle is set, the module computes new coordinates for the nodes based on the specified shrink factor (which specifies the scale relative to the geometric centers of each cell). The shrink factor can vary from 0 to 1. A value of 0 produces non-shrunk cells; 1 produces completely collapsed cells (points). This module is useful for separate viewing of cells comprising a mesh.

  • cell centers

    cell centers The cell centers module produces a mesh containing a Point cell set, each point of which represents the geometric center of a corresponding cell in the input mesh. The coordinates of cell centers are calculated by averaging the coordinates of all the nodes of a cell. The number of nodes in the output mesh is equal to the number of cells in the input mesh. If the input mesh contains Cell_Data, it becomes Node_Data in the output mesh, with each node's value equal to the corresponding cell's value. Nodal data is not output directly. You can use this module to create a position mesh for the glyphs at nodes module. You may also use this module as mesh input to the interpolate data module, then send the same nodal values as the input grid, to create interpolated nodal values at cell centroids.

  • connectivity assessment

    This module allows you to assign data and subset all (or selected) discrete (disconnected) regions of plumes or lithologic materials.

Subsections of Processing

node_computation

The node_computation module is used to perform mathematical operations on nodal data fields and coordinates. Data values can be used to affect coordinates (x, y, or z) and coordinates can be used to affect data values.

Up to two fields can be input to node_computation. Mathematical expressions can involve one or both of the input fields. Fields must be identical grids: they must have the same number of nodes and cells, otherwise the results will not make sense.

Nodal data input to each of the ports is normally scalar, however if a vector data component is used, the values in the expression are automatically the magnitude of the vector (which is a scalar). If you want a particular component of a vector, insert an extract_scalar module before connecting a vector data component to node_computation. The output is always a scalar. If a data field contains more than one data component, you may select from any of them.

Module Input Ports

  • Input Field 1[Field] Accepts a data field.
  • Input Field 2[Field / minor] Accepts a data field.
  • Input Value N1 [Number / minor] Accepts a number to be used in the field computations.
  • Input Value N2 [Number / minor] Accepts a number to be used in the field computations.
  • Input Value N3 [Number / minor] Accepts a number to be used in the field computations.
  • Input Value N4 [Number / minor] Accepts a number to be used in the field computations.

Module Output Ports

  • Output Field [Field] Outputs the subsetted field as faces.
  • Output Value N1 [Number / minor] Outputs a number used in the field computations.
  • Output Value N2 [Number / minor] Outputs a number used in the field computations.
  • Output Value N3 [Number / minor] Outputs a number used in the field computations.
  • Output Value N4 [Number / minor] Outputs a number used in the field computations.
  • Output Object [Renderable]: Outputs to the viewer.

Module Parameters

  • Data Definitions: You can have more than one new data component computed from each pass of node_computation. By default there is only Data0.
    • Add/Remove buttons allow you to add or remove Data Definitions
  • Name: The data component name (e.g. Total Hydrocarbons)
  • Units : The units of the data component (e.g. mg/kg)
  • Log Process: When your input data is log processed, the values within node_computation will always be exponentiated.
    • In other words, even when your data is log processed, you will always see actual (not log) values.
    • This toggle should be ON whenever you are dealing with Log data.
    • If you want to perform math operations on the “Log” data, you must take the log of the An* or Bn* values within node_computation
    • If you do take the log of those values, you should always exponentiate the end results before exiting node_computation.
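The last two points can be illustrated in standalone Python (not the module's script; the values are hypothetical). To average log-processed data in log space, take the log of the actual values, operate, then exponentiate before output:

```python
import math

# An0/Bn0 stand in for the actual (already exponentiated) values the
# script sees when Log Process is on:
An0 = 100.0
Bn0 = 10000.0

log_mean = (math.log10(An0) + math.log10(Bn0)) / 2.0   # math on the "Log" data
Data0 = 10.0 ** log_mean                               # exponentiate before exiting
```

Data0 is 1000.0 (the geometric mean), rather than the arithmetic mean of 5050.0, which is usually the intent with log-distributed concentration data.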

Each nodal data component from Input Field 1 is assigned as a variable to be used in the script. For example:

  • An0 : First input data component
  • An1 : Second input data component
  • An2 : Third input data component
  • An* : Nth input data component

The min and max of these components are also added as variables :

  • Min_An0 : Minimum of An0 data
  • Max_An0 : Maximum of An0 data
  • Min_An* : Minimum of An* data

For Input Field 2 the variable names change to:

  • Bn0 : First input data component
  • Bn1 : Second input data component
  • Bn2 : Third input data component
  • Bn* : Nth input data component
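A minimal standalone sketch of the kind of expression a script evaluates (here An0, Bn0, Min_An0 and Max_An0 are mocked as plain Python data; in the module they are provided automatically):

```python
# Mocked inputs standing in for the module-supplied variables:
An0 = [1.0, 4.0, 9.0]          # first component of Input Field 1
Bn0 = [0.5, 0.5, 0.5]          # first component of Input Field 2
Min_An0, Max_An0 = min(An0), max(An0)

# New data component: the per-node sum of both components, normalized
# by the range of An0:
Data0 = [(a + b - Min_An0) / (Max_An0 - Min_An0) for a, b in zip(An0, Bn0)]
```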

An interesting and simple example of using node_computation can be found here.

The equation(s) used to modify data and/or coordinates must be input as part of a Python Script. The module will generate a default script, and by modifying only one line (for the X coordinate) we get:

which with the following application:

Gives us the ability to view densely sampled data as line plots beside each boring

cell_computation

The cell_computation module is used to perform mathematical operations on cell data in fields. Unlike node_computation, it cannot affect coordinates.

Though data values can’t be used to affect coordinates (x, y, or z), the cell center (average of nodes) coordinates can be used to affect data values.

Up to two fields can be input to cell_computation. Mathematical expressions can involve one or both of the input fields.

Cell data input to each of the ports is scalar.

If a data field contains more than one data component, you may select from any of them.

Module Input Ports

  • Input Field 1[Field] Accepts a data field.
  • Input Field 2[Field / minor] Accepts a data field.
  • Input Value N1 [Number / minor] Accepts a number to be used in the field computations.
  • Input Value N2 [Number / minor] Accepts a number to be used in the field computations.
  • Input Value N3 [Number / minor] Accepts a number to be used in the field computations.
  • Input Value N4 [Number / minor] Accepts a number to be used in the field computations.

Module Output Ports

  • Output Field [Field] Outputs the subsetted field as faces.
  • Output Value N1 [Number / minor] Outputs a number used in the field computations.
  • Output Value N2 [Number / minor] Outputs a number used in the field computations.
  • Output Value N3 [Number / minor] Outputs a number used in the field computations.
  • Output Value N4 [Number / minor] Outputs a number used in the field computations.
  • Output Object [Renderable]: Outputs to the viewer.

Each cell data component from Input Field 1 is assigned as a variable to be used in the script. For example:

  • An0 : First input data component
  • An1 : Second input data component
  • An2 : Third input data component
  • An* : Nth input data component

The min and max of these components are also added as variables :

  • Min_An0 : Minimum of An0 data
  • Max_An0 : Maximum of An0 data
  • Min_An* : Minimum of An* data

For Input Field 2 the variable names change to:

  • Bn0 : First input data component
  • Bn1 : Second input data component
  • Bn2 : Third input data component
  • Bn* : Nth input data component

combine_nodal_data

The combine_nodal_data module is used to create a new set of nodal data components by selecting components from up to six separate input data fields. The mesh (x-y-z coordinates) from the first input field will be the mesh in the output. The input fields should have the same scale, origin, and number of nodes in order for the output data to have any meaning. This module is useful for combining data contained in multiple field ports or files, or from different Kriging modules.

Module Input Ports

  • Model Field [Field] Accepts a field with data whose grid will be exported.
  • Input Field 1 [Field] Accepts a data field.
  • Input Field 2 [Field] Accepts a data field.
  • Input Field 3 [Field] Accepts a data field.
  • Input Field 4 [Field] Accepts a data field.
  • Input Field 5 [Field] Accepts a data field.

Module Output Ports

  • Output Field [Field] Outputs the field with selected data
  • Output Object [Renderable]: Outputs to the viewer.

interpolate data

The interpolate data module interpolates nodal and/or cell data from a 3D or 2D field to either a 2D mesh or 1D line. Typical uses of this module are mapping of data from a 3D mesh onto a geologic surface or a 2D fence section. In these applications the 2D surface(s) simply provide the new geometry (mesh) onto which the adjacent nodal values are interpolated. The primary requirement is that the data be equal or higher dimensionality than the mesh to be interpolated onto. For instance, if the user has a 2D surface with nodal data (perhaps z values), then a 1D line may be input and the nearest nodal values from the 2D surface will be interpolated onto it.
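The nearest-value behavior described for the 2D-surface-to-1D-line case can be sketched as follows (an illustration, not the module's algorithm; `nearest_interpolate` is a hypothetical helper):

```python
# For each destination point, take the value at the nearest source node.
def nearest_interpolate(src_pts, src_vals, dst_pts):
    def d2(a, b):
        # Squared Euclidean distance (square root not needed for ranking):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [src_vals[min(range(len(src_pts)), key=lambda i: d2(src_pts[i], p))]
            for p in dst_pts]

# Two source nodes with data, interpolated onto two line points:
vals = nearest_interpolate([(0.0, 0.0), (10.0, 0.0)], [1.0, 2.0],
                           [(1.0, 0.0), (9.0, 0.0)])
```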

NOTE: This module supplants interpolate nodal data and interpolate cell data.

Module Input Ports

  • Input Data Field [Field] Accepts a data field.
  • Input Destination Field [Field] Accepts a field onto which the data will be interpolated

Module Output Ports

  • Output Field [Field] Outputs the Destination Field with the new data
  • Output Object [Renderable]: Outputs to the viewer.

The compute thickness module allows you to compute the thickness of complex plumes or cell sets such as lithologic modeling’s materials.

Module Input Ports

  • Input Field The field to map thickness data onto
  • Input Volume The (volumetric) field to determine thickness data from

Module Output Ports

  • Output Field The surface (or 3D object) with mapped thickness data

Important Features and Considerations

The right input port must have a 3D field as input.

  • There is no concept of thickness associated with 2D or 3D surfaces
  • Volumetric inputs can be plume_shell or intersection_shell objects which are hollow.
    • Thickness will be determined based upon the apparent thickness of the plume elements.
    • When 3D Shells are input, they must be closed objects.

Determining thickness of arbitrary volumetric objects is a very computationally intensive operation. You can use this module to compute thickness in two primary ways:

  • Compute the thickness distribution of a 3D object and project that onto a 2D surface (generally at the ground surface)
    • A surface (such as from geologic surface) would connect to the first (left) input port
    • The volumetric object connects to the second (right) input port
  • Compute the thickness distribution of a 3D object and project that onto the same object
    • The volumetric object connects to the first (left) input port
    • The same volumetric object connects to the second (right) input port

Note: In all cases run times can be long. Coarser grids and the first option will run faster, but the complexity and resolution of the volumetric object will be the major factor in the computation time.

translate by data

The translate by data module accepts nearly any mesh and translates the grid in x, y, or z based upon either a nodal or cell data component or a constant.

The interface enables changing the Scale Factor for z translates to accommodate an overall z exaggeration in your applications. This module is most useful when used with the import vector gis module to properly place polygonal shapefile cells at the proper elevation.

Warning: The scale factor is always applied. If translating along any axis other than z, it is unlikely that you want to use the Z Exaggeration factor used elsewhere in your application.

  • When translating by a Constant, the amount is affected by the Z Scale Factor.
  • When translating by Cell Data, a radio box appears to allow specification of the cell data component
  • When translating by Node Data, a radio box appears to allow specification of the nodal data component
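The constant-translate rule with the Scale Factor can be sketched as follows (a hypothetical helper, not EVS code):

```python
# z translations are multiplied by the Scale Factor (vertical
# exaggeration), per the warning above:
def translate_z(z_coords, amount, scale_factor=1.0):
    return [z + amount * scale_factor for z in z_coords]

moved = translate_z([10.0, 20.0], 5.0, scale_factor=2.0)
```

With a Scale Factor of 2.0, a constant translate of 5.0 actually moves the grid by 10.0, which is why the factor is usually unwanted when translating along x or y.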

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration).
  • Input Field [Field] Accepts a data field from 3d estimation or other similar modules.

Module Output Ports

  • Z Scale [Number] Outputs Z Scale (vertical exaggeration) to other modules
  • Output Field [Field] Outputs the subsetted field
  • Scale Link
  • Output Object [Renderable]: Outputs to the viewer

cell data to node data

The cell data to node data module is used to translate cell data components to nodal data components. Cell data components are data components which are associated with cells rather than nodes. Most modules in EVS that deal with analytical or continuum data support node based data. Therefore, cell data to node data can be used to translate cell based data to a nodal data structure consistent with other EVS modules.

Module Input Ports

  • Input Field [Field] Accepts a field with cell data.

Module Output Ports

  • Output Field [Field / Minor] Outputs the field with cell data converted to nodal data
  • Output Object [Renderable]: Outputs to the viewer.

The node data to cell data module is used to translate nodal data components to cell data components. Cell data components are data components which are associated with cells rather than nodes. Most modules in EVS that deal with analytical or continuum data support node based data, and those that deal with geology (lithology) tend to use cell data. Therefore, node data to cell data can be used to translate nodal data to cell data.
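One common conversion rule, shown as a sketch (EVS's exact averaging may differ): a cell takes the average of its nodes' values.

```python
# Hypothetical nodal values and a cell defined by four node indices:
node_data = {0: 1.0, 1: 3.0, 2: 5.0, 3: 7.0}
cell_nodes = [0, 1, 2, 3]

# Cell value as the average of its nodes' values:
cell_value = sum(node_data[i] for i in cell_nodes) / len(cell_nodes)
```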

Module Input Ports

  • Input Field [Field] Accepts a field with nodal data.

Module Output Ports

  • Output Field [Field / Minor] Outputs the field with nodal data converted to cell data
  • Output Object [Renderable]: Outputs to the viewer.

shrink cells

The shrink cells module produces a mesh containing disjoint cells which can be optionally shrunk relative to their geometric centers. It creates duplicate nodes for all cells that share the same node, making them disjoint. If the shrink cells toggle is set, the module computes new coordinates for the nodes based on the specified shrink factor (which specifies the scale relative to the geometric centers of each cell). The shrink factor can vary from 0 to 1. A value of 0 produces non-shrunk cells; 1 produces completely collapsed cells (points). This module is useful for separate viewing of cells comprising a mesh.
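The shrink transform described above amounts to moving each node toward its cell's geometric center by the shrink factor (a sketch, not the module's implementation):

```python
# f = 0 leaves the node unchanged; f = 1 collapses it onto the center.
def shrink(node, center, f):
    return tuple(c + (1.0 - f) * (n - c) for n, c in zip(node, center))

p = shrink((2.0, 0.0, 0.0), (1.0, 0.0, 0.0), 0.5)   # halfway to the center
```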

Module Input Ports

  • Input Field [Field] Accepts a field

Module Output Ports

  • Output Field [Field / Minor] Outputs the field with modified cells
  • Output Object [Renderable]: Outputs to the viewer

cell centers

The cell centers module produces a mesh containing a Point cell set, each point of which represents the geometric center of a corresponding cell in the input mesh. The coordinates of cell centers are calculated by averaging the coordinates of all the nodes of a cell. The number of nodes in the output mesh is equal to the number of cells in the input mesh. If the input mesh contains Cell_Data, it becomes Node_Data in the output mesh, with each node's value equal to the corresponding cell's value. Nodal data is not output directly. You can use this module to create a position mesh for the glyphs at nodes module. You may also use this module as mesh input to the interpolate data module, then send the same nodal values as the input grid, to create interpolated nodal values at cell centroids.
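The coordinate averaging described above can be sketched as:

```python
# Geometric center of a cell = per-axis average of its node coordinates.
def cell_center(nodes):
    return tuple(sum(axis) / len(nodes) for axis in zip(*nodes))

# A square quad cell in the z = 0 plane:
center = cell_center([(0.0, 0.0, 0.0), (2.0, 0.0, 0.0),
                      (0.0, 2.0, 0.0), (2.0, 2.0, 0.0)])
```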

Module Input Ports

  • Input Field [Field] Accepts a field.

Module Output Ports

  • Output Field [Field / Minor] Outputs the field as points representing the centers of the cells.
  • Output Object [Renderable]: Outputs to the viewer.

This module allows you to assign data and subset all (or selected) discrete (disconnected) regions of plumes or lithologic materials.

  • OVERVIEW: When we create subsets of models, whether based upon analytical data, stratigraphic modeling, or lithologic modeling, these subsets often exist as several disjoint pieces.

    • In the case of analytical (e.g., contaminant) plumes, the number and size of regions (pieces) can strongly depend on the subsetting level.
    • With lithologic models, the number and size of the regions depends on the complexity of the lithologic data and the modeling parameters.
  • FUNCTION: The connectivity assessment module assigns a new data component to these dis-connected regions.

    • The pieces are sorted based upon the number of cells in each piece.

      • This is generally well correlated to the volume of that region, but it is definitely possible that the region with the most cells may not have the greatest volume.
    • The highest cell count region is assigned to 0 (zero) and regions with descending cell counts are assigned higher integer values.

  • PARAMETERS:

    1. Merge Cell Sets (toggle): Merges cell sets such as stratigraphic layers or lithologic materials. Generally should be on when dealing with analytical data.

    2. Assessment Mode: Determines the criteria for subsetting of regions and/or assigning data

      1. Add Region ID Data: Does not subset, but assigns Cell Data corresponding to cell counts
      2. Subset By Region ID(s)
      3. Region Closest to Point
      4. Region with most cells: Outputs Region ID = 0 without assigning data.
    3. Point Coordinate: Is the X, Y, Z coordinate to be used for “Closest region”

    4. Region IDs: The list of regions to include in the output if Selection Mode is set to “Subset By Region ID”
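A minimal sketch of the region-ID assignment: connected-component labeling followed by renumbering so the largest region gets ID 0 (cell adjacency is mocked here with a simple neighbor map; `region_ids` is a hypothetical helper, not the module's algorithm):

```python
from collections import defaultdict, deque

# Label connected components with a breadth-first search, then renumber
# so the region with the most cells gets ID 0, the next gets 1, etc.
def region_ids(cells, neighbors):
    label = {}
    comp = 0
    for c in cells:
        if c in label:
            continue
        label[c] = comp
        queue = deque([c])
        while queue:
            cur = queue.popleft()
            for nb in neighbors.get(cur, ()):
                if nb not in label:
                    label[nb] = comp
                    queue.append(nb)
        comp += 1
    # Sort labels by descending cell count and remap to 0, 1, 2, ...
    counts = defaultdict(int)
    for v in label.values():
        counts[v] += 1
    order = sorted(counts, key=counts.get, reverse=True)
    remap = {old: new for new, old in enumerate(order)}
    return {c: remap[v] for c, v in label.items()}

# Two disjoint regions: {1, 2} (2 cells) and {3, 4, 5} (3 cells):
ids = region_ids([1, 2, 3, 4, 5], {1: [2], 2: [1], 3: [4], 4: [3, 5], 5: [4]})
```

The larger region {3, 4, 5} receives ID 0, matching the descending-cell-count rule above.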

  • read evs field

    read evs field read evs field reads a dataset from the primary and legacy file formats created by write evs field. .EF2: The only Lossless format for models created in 2024 and later versions .eff ASCII format, best if you want to be able to open the file in an editor or print it. For a description of the .EFF file formats click here. .efz GNU Zip compressed ASCII, same as .eff but in a zip archive .efb binary compressed format, the smallest & fastest format due to its binary form Output Quality: An important feature of read evs field is the ability to specify two separate files which correspond to High Quality (e.g. fine grids) and Low Quality (e.g. coarse grids a.k.a. fast).

  • import vtk

    import vtk import vtk reads a dataset from any of the following 9 VTK file formats. Please note that VTK’s file formats do not include coordinate units information, nor analyte units. There is a parameter which allows you to specify coordinate units (meters are the default). vtk: legacy format; vtr: Rectilinear grids; vtp: Polygons (surfaces); vts: Structured grids; vtu: Unstructured grids; pvtp: Partitioned Polygons (surfaces); pvtr: Partitioned Rectilinear grids; pvts: Partitioned Structured grids; pvtu: Partitioned Unstructured grids

  • import cad

    import cad General Module Function The import cad module will read the following versions of CAD files: AutoCAD DWG and DXF files through AutoCAD 2021 (version 24.0) Bentley Microstation DGN files through Version 8. This module provides the user with the capability to integrate site plans, buildings, and other 2D or 3D features into the EVS visualization, to provide a frame of reference for understanding the three dimensional relationships between the site features, and characteristics of geologic, hydrologic, and chemical features. The drawing entities are treated as three dimensional objects, which provides the user with a lot of flexibility in the placement of CAD objects in relation to EVS objects in the visualization. The project onto surface and geologic_surfmap modules allow the user to drape CAD line-type entities (not 3D-Faces) onto three dimensional surfaces.

  • import vector gis

    import vector gis The import vector gis module reads the following vector file formats: ESRI Shapefile (.shp); Arc/Info E00 (ASCII) Coverage (.e00); Atlas BNA file (.bna); GeoConcept text export (.gxt); GMT ASCII Vectors (.gmt); and the MapInfo TAB (.tab) format. Module Input Ports Z Scale [Number] Accepts Z Scale (vertical exaggeration) from other modules Module Output Ports Z Scale [Number] Outputs Z Scale (vertical exaggeration) to other modules Output [Field] Outputs the GIS data. Output Object [Renderable]: Outputs to the viewer Properties and Parameters

  • import raster as horizon

    import raster as horizon The import raster as horizon module reads several different raster format files in EVS Geology format. These formats include DEMs, Surfer grid files, Mr. Sid files, ADF files, etc. Multiple import raster as horizon modules can be combined with combine horizons into a 3D geologic model. Alternatively, a single file can be displayed as a surface (with surfaces from horizons) or you can export its coordinates (with export nodes) to use the values in a GMF file.

  • buildings

    buildings The buildings module reads C Tech’s .BLDG file and creates various 3D objects (boxes, cylinders, wedge-shapes for roofs, simple houses etc.), and provides a means for scaling the objects and/or placing the objects at user specified locations. The objects are displayed based on x, y & z coordinates supplied by the user in a .bldg file, with additional scaling option controls on the buildings user interface.

  • read lines

    read_lines The read_lines module is used to visualize a series of points with data connected by lines. read_lines accepts three different file formats: with the APDV file format, the lines are connected by boring ID; with the ELF (EVS Line File) format, each line is made by defining the points that make up the line; and with the SAD (Strike and Dip) file format, there is a choice to connect each sample by ID or by Data Value.

  • read strike and dip

    read strike and dip General Module Function The read strike and dip module is used to visualize sampled locations. It places a disk, oriented by strike and dip, at each sample location. Each disk can be probed and can be colored by a picked color, by ID, or by data value. If an ID is present, such as a boring ID, there is an option to place tubes between disks with matching IDs.

  • read glyph

    read glyph read glyph replaces the Glyphs sub-library that was in the tools library. It reads glyphs saved in any of the three primary EVS field file formats and allows you to modify the shape and orientation of the glyph so it can be used in the various modules that employ glyphs in slightly different ways. These include glyphs at nodes, place_glyph, drive_glyphs, advector, post_samples, etc. Most modules EXCEPT post_samples will use the glyphs without changing the default alignment. The supported file formats are:

  • import geometry

    General Module Function

Subsections of Import

read evs field

read evs field reads a dataset from the primary and legacy file formats created by write evs field.

  • .ef2: the only lossless format for models created in 2024 and later versions
  • .eff: ASCII format, best if you want to be able to open the file in an editor or print it. For a description of the .EFF file formats click here.
  • .efz: GNU Zip compressed ASCII, same as .eff but in a zip archive
  • .efb: binary compressed format, the smallest & fastest format due to its binary form

Output Quality: An important feature of read evs field is the ability to specify two separate files which correspond to High Quality (e.g. fine grids) and Low Quality (e.g. coarse grids a.k.a. fast).

You can see that read evs field is specifying two different EFB files. The Output Quality is set to Highest Quality and is Linked (black circle). The viewer shows:

If we change the Output Quality on the Home Tab

It changes the setting in read evs field and the viewer changes to show:

Though you “can” change the Output Quality in read evs field, it is best to change it on the Home Tab to make sure that all read evs field modules in your application will have the same setting. This is not relevant to this simple application, but if we were using a cutting surface (saved as fine and coarse EFBs) and doing distance to surface operations on a very large grid, this synchronization would be important.

read evs field effectively has an explode_and_scale and an external_faces module built in. This allows the module to perform:

  • Z Scaling
  • Exploding
  • Nodal or Cell data selection
  • Selection of cell_sets

Module Output Ports

  • Geologic legend Information [Geology legend] Supplies the geologic material information for the legend module.
  • Output [Field] Outputs the saved field.
  • File Notes [String / minor] Outputs a string to document the settings used to create the field.
  • Output Object [Renderable]: Outputs to the viewer.
  • EFF File

    EVS Field File Formats and Examples EVS Field file formats supplant the need for the UCD, netCDF, Field (.fld), and EVS_Geology formats by incorporating all of their functionality and more in a new file format with three mode options.

Subsections of read evs field

EVS Field File Formats and Examples

EVS Field file formats supplant the need for the UCD, netCDF, Field (.fld), and EVS_Geology formats by incorporating all of their functionality and more in a new file format with three mode options.

  1. .eff ASCII format, best if you want to be able to open the file in an editor or print it

  2. .efz GNU Zip compressed ASCII, same as .eff but in a zip archive

  3. .efb binary compressed format, the smallest & fastest format due to its binary form

Here are the tags available in an EVS field file, in the appropriate order. Note that no file will contain ALL these tags, as some are specific to the type of field (based on definition). The binary file format is undocumented and exclusively used by C Tech’s write evs field module.

If the file is written compressed, the .efz file (and any split, extra data files) will all be compressed. The compression algorithm is compatible with the free gzip/gunzip programs or WinZip, so the user can uncompress an .efz file and get an .eff file at will. The .efb file is also compressed (hence its very small size), but uncompressing this file will not make it human-readable.
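Because the .efz format is plain gzip, an .efz file can also be unpacked programmatically. A minimal sketch in Python (the file names are hypothetical):

```python
import gzip
import shutil

def efz_to_eff(efz_path: str, eff_path: str) -> None:
    """Uncompress a gzip-compressed .efz file into its ASCII .eff equivalent."""
    with gzip.open(efz_path, "rb") as src, open(eff_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
```

The same approach works for any split, extra data files written in compressed form; it will not make an .efb file readable, since that format is binary underneath the compression.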

EVS Field Files

EVS Field Files consist of file tags that delineate the various sections of the file(s) and data (coordinates, nodal and/or cell data, and connectivity). The file tags are discussed below followed by portions of a few example files.

FILE TAGS:

The file tags for the ASCII file formats (shown in Bold Italics) are discussed below with a representative example. They are given in the appropriate order. If you need assistance creating software to write these file formats, please contact support@ctech.com.

DATE_CREATED (optional) 7/16/2004 1:57:55 PM

The creation date of the file.

EVS_FIELD_FILE_NOTES_START (optional)

Insert your Field file notes here.

EVS_FIELD_FILE_NOTES_END

This is the file description block. These notes are used to describe the contents of the Field file. The entire block is optional; however, if you wish to use notes, both the start and end tags are required.

DEFINITION Mesh+Node_Data

This is the type of field we are creating. Typical options are:

  1. Mesh+Node_Data

  2. Mesh+Cell_Data

  3. Mesh+Node_Data+Cell_Data

  4. Mesh_Struct+Node_Data (Geology)

  5. Mesh_Unif+Node_Data (Uniform field)

NSPACE 3

The nspace of the output field. Typically 3, but 2 in the case of geology or an image.

NNODES 66355

Number of nodes. Not used for Mesh_Struct or Mesh_Unif.

NDIM 2

Number of dimensions in a Mesh_Struct or Mesh_Unif

DIMS 41 41

The dimensions for a Mesh_Struct or uniform field

POINTS 11061.528999 12692.304504 -44.049999 11611.330994 13098.105469 11.500000

The lower left and upper right corner of a uniform field (Mesh_Unif only)
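Since a Mesh_Unif grid is defined entirely by its two corner points and DIMS, the node coordinates along each axis can be recovered by linear interpolation. A sketch, assuming nodes lie exactly on the two corners:

```python
def uniform_field_axes(points, dims):
    """Recover the per-axis node coordinates of a Mesh_Unif grid from the
    POINTS tag (lower-left then upper-right corner) and the DIMS tag."""
    lo, hi = points[:3], points[3:]
    return [
        [lo[a] + i * (hi[a] - lo[a]) / (dims[a] - 1) for i in range(dims[a])]
        for a in range(3)
    ]
```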

COORD_UNITS “ft”

Coordinate Units

NUM_NODE_DATA 7

Number of nodal data components

NUM_CELL_DATA 1

Number of cell data components

NCELL_SETS 5

Number of cell sets

NODES FILE “test_split.xyz” ROW 1 X 1 Y 2 Z 3

The nodes section is starting. If the line says “NODES IN_FILE”, the node coordinates (x/y/z) follow on the next NNODES rows; otherwise, the line will say FILE “filename” ROW 1 X 1 Y 2 Z 3, giving the file containing the coordinates, the row to start at (1 is the first line of the file), and the columns containing your X, Y, and Z values.

NODE_DATA_DEF 0 “TOTHC” “log_ppm” MINMAX -3 4.592 FILE “test_split.nd” ROW 1 COLS 1

NODE_DATA_DEF specifies the definition of a nodal data component. The 2nd word is the data component number, the 3rd is the name, and the 4th is the units; then it will either say IN_FILE (which means the values will start after a NODE_DATA_START tag) or give the file information. Other options are:

  1. MINMAX - two numbers follow which are the data minimum and maximum. This behaves much like the set_min_max module.

  2. If this is vector data, there will be a VECLEN 3 tag in there, and COLS will need to have 3 numbers following it (for each component of the vector)

  3. NODE_DATA_START. All the node data components that are specified IN_FILE are listed in order after this tag.
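A NODE_DATA_DEF line can be tokenized with quote-aware splitting and then scanned keyword by keyword. The sketch below covers only the variants described above (IN_FILE, MINMAX, FILE/ROW/COLS), and assumes straight quotes in the actual file:

```python
import shlex

def parse_node_data_def(line: str) -> dict:
    """Parse one NODE_DATA_DEF tag line (a sketch; VECLEN is not handled)."""
    tokens = shlex.split(line)  # quote-aware: the name and units keep spaces
    assert tokens[0] == "NODE_DATA_DEF"
    info = {"index": int(tokens[1]), "name": tokens[2], "units": tokens[3]}
    rest = tokens[4:]
    while rest:
        key = rest.pop(0)
        if key == "MINMAX":
            info["min"], info["max"] = float(rest.pop(0)), float(rest.pop(0))
        elif key == "FILE":
            info["file"] = rest.pop(0)
        elif key == "ROW":
            info["row"] = int(rest.pop(0))
        elif key == "COLS":
            info["cols"] = [int(c) for c in rest]
            rest = []
        elif key == "IN_FILE":
            info["in_file"] = True
    return info
```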

CELL_SET_DEF 0 8120 Hex “Fill” MINMAX 1 14 FILE “test_split.conn” ROW 1

Definition of a cell set. The 2nd word is the cell set number, the 3rd is the number of cells, the 4th is the type, and the 5th is the name; then it is either IN_FILE (which means the cells will be listed in order by cell set) or the FILE “filename” section and a row to begin reading from. Other options are:

  1. MINMAX - two numbers follow which are the data minimum and maximum. This behaves much like the cell_set_min_max module.

  2. CELL_START. Start of all the cell set definitions that are specified IN_FILE.

CELL_DATA_DEF 0 “Indicator” “Discreet Unit” FILE “test_split.cd” ROW 1 COLS 1

Definition of cell data. Same options as NODE_DATA_DEF

CELL_DATA_START

Start of all cell data that is specified as IN_FILE

LAYER_NAMES “Top” “Fill” “Silt” “Clay” “Gravel” “Sand”

Allows you to specify the names associated with surfaces (layers)

MATERIAL_MAPPING “1|Silt” “2|Fill” “3|Clay” “4|Sand” “5|Gravel”

Allows you to specify the Material_ID and the associated material names. Note that each number/name pair is in quotes, with the name separated from the number by the pipe “|” symbol.
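The quoted number|name pairs can be split on the pipe character after quote-aware tokenization. A sketch (assuming straight quotes in the actual file):

```python
import shlex

def parse_material_mapping(line: str) -> dict:
    """Parse a MATERIAL_MAPPING line into a {Material_ID: name} dictionary."""
    tokens = shlex.split(line)  # each quoted "number|name" pair is one token
    assert tokens[0] == "MATERIAL_MAPPING"
    mapping = {}
    for pair in tokens[1:]:
        number, name = pair.split("|", 1)
        mapping[int(number)] = name
    return mapping
```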

END

Marks the end of the data section of the file. (Allows us to put a password on .eff files)

EVS Field File Examples:

Because EVS Field Files can contain so many different types of grids, it is beyond the scope of our help system to include every variant.

3d estimation - EFF file representing a uniform field: The file below is an abbreviated example of writing the output of 3d estimation having kriged a uniform field (which can be volume rendered). Large sections of the data regions of this file are omitted to save space. This is represented by sections of the file with “*** omitted ***” replacing many lines of data.

DEFINITION Mesh_Unif+Node_Data

NSPACE 3

NDIM 3

DIMS 41 41 35

COORD_UNITS “ft”

NUM_NODE_DATA 7

POINTS 11281.910004 12211.149994 -29.900000 12515.890015 13259.449951 0.900000

NODE_DATA_DEF 0 “VOC” “log_ppm” IN_FILE

NODE_DATA_DEF 1 “Confidence-VOC” “linear_%” IN_FILE

NODE_DATA_DEF 2 “Uncertainty-VOC” “linear_Unc” IN_FILE

NODE_DATA_DEF 3 “Geo_Layer” “linear_” IN_FILE

NODE_DATA_DEF 4 “Elevation” “linear_ft” IN_FILE

NODE_DATA_DEF 5 “Layer Thickness” “linear_ft” IN_FILE

NODE_DATA_DEF 6 “Material_ID” “linear_” IN_FILE

NODE_DATA_START

-2.357487 34.455845 2.325005 0.000000 -29.900000 30.799999 0.000000

-3.000000 34.977974 0.000000 0.000000 -29.900000 30.799999 0.000000

-3.000000 35.603794 0.000000 0.000000 -29.900000 30.799999 0.000000

***** OMITTED *****

-3.000000 30.056839 0.000000 0.000000 0.900000 30.799999 0.000000

-3.000000 29.858747 0.000000 0.000000 0.900000 30.799999 0.000000

-3.000000 29.673925 0.000000 0.000000 0.900000 30.799999 0.000000

END

3d estimation - EFF Split file representing a uniform field: The file below is a complete example of writing the output of 3d estimation having kriged a uniform field (which can be volume rendered). Note that the .EFF file is quite small, but references the data in a separate file named krig_3d_uniform_split.nd.

DEFINITION Mesh_Unif+Node_Data

NSPACE 3

NDIM 3

DIMS 41 41 35

COORD_UNITS “ft”

NUM_NODE_DATA 7

POINTS 11281.910004 12211.149994 -29.900000 12515.890015 13259.449951 0.900000

NODE_DATA_DEF 0 “VOC” “log_ppm” FILE “krig_3d_uniform_split.nd” ROW 1 COLS 1

NODE_DATA_DEF 1 “Confidence-VOC” “linear_%” FILE “krig_3d_uniform_split.nd” ROW 1 COLS 2

NODE_DATA_DEF 2 “Uncertainty-VOC” “linear_Unc” FILE “krig_3d_uniform_split.nd” ROW 1 COLS 3

NODE_DATA_DEF 3 “Geo_Layer” “linear_” FILE “krig_3d_uniform_split.nd” ROW 1 COLS 4

NODE_DATA_DEF 4 “Elevation” “linear_ft” FILE “krig_3d_uniform_split.nd” ROW 1 COLS 5

NODE_DATA_DEF 5 “Layer Thickness” “linear_ft” FILE “krig_3d_uniform_split.nd” ROW 1 COLS 6

NODE_DATA_DEF 6 “Material_ID” “linear_” FILE “krig_3d_uniform_split.nd” ROW 1 COLS 7

END

Large sections of the data regions of the data file krig_3d_uniform_split.nd are omitted below to save space. This is represented by sections of the file with “*** omitted ***” replacing many lines of data.

-2.357487 34.455845 2.325005 0.000000 -29.900000 30.799999 0.000000

-3.000000 34.977974 0.000000 0.000000 -29.900000 30.799999 0.000000

-3.000000 35.603794 0.000000 0.000000 -29.900000 30.799999 0.000000

***** OMITTED *****

-3.000000 30.056839 0.000000 0.000000 0.900000 30.799999 0.000000

-3.000000 29.858747 0.000000 0.000000 0.900000 30.799999 0.000000

-3.000000 29.673925 0.000000 0.000000 0.900000 30.799999 0.000000
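Given the FILE/ROW/COLS references in a split file’s NODE_DATA_DEF tags, each data component is one whitespace-separated column of the external data file. A reading sketch (1-based ROW and COLS, as in the tags above):

```python
def read_split_column(path: str, row: int, col: int) -> list:
    """Read one data component from a split data file, starting at the
    1-based line ROW and taking the 1-based column COL from each line."""
    values = []
    with open(path) as f:
        for line_number, line in enumerate(f, start=1):
            if line_number < row or not line.strip():
                continue
            values.append(float(line.split()[col - 1]))
    return values
```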

gridding and horizons & 3d estimation - EFF file representing multiple geologic layers with analyte (e.g. chemistry): The file below is an abbreviated example of writing the output of 3d estimation having kriged analyte (e.g. chemistry) data with geology input. Large sections of the data regions of this file are omitted to save space. This is represented by sections of the file with “*** omitted ***” replacing many lines of data.

NSPACE 3

NNODES 66355

COORD_UNITS “ft”

NUM_NODE_DATA 7

NCELL_SETS 5

NODES IN_FILE

11153.998856 12722.725708 2.970446

11161.871033 12715.198792 2.783408

11169.743210 12707.671875 2.594242

***** OMITTED *****

11250.848221 12865.266907 -42.575920

11248.750000 12870.909973 -42.000000

11243.389938 12870.020935 -42.474934

NODE_DATA_DEF 0 “TOTHC” “log_mg/kg” IN_FILE

NODE_DATA_DEF 1 “Confidence-TOTHC” “linear_%” IN_FILE

NODE_DATA_DEF 2 “Uncertainty-TOTHC” “linear_Unc” IN_FILE

NODE_DATA_DEF 3 “Geo_Layer” “Linear_” IN_FILE

NODE_DATA_DEF 4 “Elevation” “Linear_ft” IN_FILE

NODE_DATA_DEF 5 “Layer Thickness” “Linear_ft” IN_FILE

NODE_DATA_DEF 6 “Material_ID” “Linear_” IN_FILE

NODE_DATA_START

-0.777059 27.239126 15.861248 0.000000 2.970446 8.270601 2.000000

-0.661227 27.349216 16.503609 0.000000 2.783408 8.270658 2.000000

-0.288564 27.512394 18.822187 0.000000 2.594242 8.261375 2.000000

***** OMITTED *****

2.886921 69.551514 1.128253 4.000000 -42.575920 13.628321 4.000000

3.113943 99.999977 0.000000 4.000000 -42.000000 13.654032 4.000000

3.070153 72.869553 0.841437 4.000000 -42.474934 13.646055 4.000000

CELL_SET_DEF 0 8120 Hex “Fill” IN_FILE

CELL_SET_DEF 1 14680 Hex “Silt” IN_FILE

CELL_SET_DEF 2 6502 Hex “Clay” IN_FILE

CELL_SET_DEF 3 11284 Hex “Gravel” IN_FILE

CELL_SET_DEF 4 14412 Hex “Sand” IN_FILE

CELL_START

0 1 42 41 1681 1682 1723 1722

1 2 43 42 1682 1683 1724 1723

2 3 44 43 1683 1684 1725 1724

***** OMITTED *****

54462 54503 66349 66348 56143 56184 66353 66352

54503 54502 66350 66349 56184 56183 66354 66353

54502 54461 66347 66350 56183 56142 66351 66354

END

Post_samples - EFF file representing spheres: The file below is a complete example of writing the output of post_samples’ blue-black field port having read the file initial_soil_investigation_subsite.apdv. This data file has 99 samples with data that was log processed. If this file is read by read evs field, it creates all 99 spheres colored and sized as they were in post_samples. The tubes and any labeling are not included in the field port from which this file was created.

DEFINITION Mesh+Node_Data

NSPACE 3

NNODES 99

COORD_UNITS “units”

NUM_NODE_DATA 2

NCELL_SETS 1

NODES IN_FILE

11566.340027 12850.590027 -10.000000

11566.340027 12850.590027 -70.000000

11566.340027 12850.590027 -160.000000

11586.340027 13050.589966 -10.000000

11586.340027 13050.589966 -70.000000

11586.340027 13050.589966 -160.000000

11381.700012 12747.500000 -15.000000

11381.700012 12747.500000 -25.000000

11414.399994 12781.099976 -15.000000

11414.399994 12781.099976 -25.000000

11338.000000 12830.799988 -10.000000

11338.000000 12830.799988 -65.000000

11338.000000 12830.799988 -115.000000

11338.000000 12830.799988 -165.000000

11410.290009 12724.690002 -5.000000

11410.290009 12724.690002 -35.000000

11410.290009 12724.690002 -45.000000

11410.290009 12724.690002 -125.000000

11410.290009 12724.690002 -175.000000

11427.000000 12780.900024 -10.000000

11427.000000 12780.900024 -30.000000

11427.000000 12780.900024 -80.000000

11416.899994 12819.450012 -10.000000

11416.899994 12819.450012 -30.000000

11416.899994 12819.450012 -70.000000

11416.899994 12819.450012 -95.000000

11416.899994 12819.450012 -105.000000

11416.899994 12819.450012 -120.000000

11416.899994 12819.450012 -140.000000

11401.730011 12897.770020 -10.000000

11401.730011 12897.770020 -30.000000

11401.730011 12897.770020 -80.000000

11401.730011 12897.770020 -110.000000

11401.730011 12897.770020 -145.000000

11401.730011 12897.770020 -180.000000

11259.670013 12819.289978 -10.000000

11259.670013 12819.289978 -40.000000

11259.670013 12819.289978 -70.000000

11259.670013 12819.289978 -95.000000

11259.670013 12819.289978 -140.000000

11340.489990 12892.609985 -30.000000

11340.489990 12892.609985 -55.000000

11340.489990 12892.609985 -80.000000

11340.489990 12892.609985 -110.000000

11340.489990 12892.609985 -130.000000

11340.489990 12892.609985 -165.000000

11248.750000 12870.909973 -10.000000

11248.750000 12870.909973 -35.000000

11248.750000 12870.909973 -45.000000

11248.750000 12870.909973 -85.000000

11248.750000 12870.909973 -110.000000

11248.750000 12870.909973 -160.000000

11248.750000 12870.909973 -210.000000

11086.519997 12830.669983 -15.000000

11086.519997 12830.669983 -30.000000

11086.519997 12830.669983 -80.000000

11086.519997 12830.669983 -130.000000

11211.869995 12710.750000 -30.000000

11211.869995 12710.750000 -80.000000

11211.869995 12710.750000 -135.000000

11199.039993 12810.159973 -20.000000

11199.039993 12810.159973 -40.000000

11199.039993 12810.159973 -85.000000

11199.039993 12810.159973 -150.000000

11298.000000 12808.630005 -60.000000

11496.339996 12753.590027 -10.000000

11496.339996 12753.590027 -30.000000

11496.339996 12753.590027 -80.000000

11496.339996 12753.590027 -110.000000

11496.339996 12753.590027 -150.000000

11309.029999 12948.989990 -10.000000

11309.029999 12948.989990 -35.000000

11309.029999 12948.989990 -95.000000

11309.029999 12948.989990 -125.000000

11309.029999 12948.989990 -130.000000

11209.350006 12993.940002 -5.000000

11209.350006 12993.940002 -35.000000

11209.350006 12993.940002 -60.000000

11209.350006 12993.940002 -95.000000

11209.350006 12993.940002 -125.000000

11301.970001 13079.660034 -20.000000

11301.970001 13079.660034 -30.000000

11301.970001 13079.660034 -85.000000

11301.970001 13079.660034 -125.000000

11286.769989 13026.699951 -30.000000

11286.769989 13026.699951 -45.000000

11286.769989 13026.699951 -75.000000

11286.769989 13026.699951 -120.000000

11393.470001 12948.900024 -20.000000

11393.470001 12948.900024 -45.000000

11393.470001 12948.900024 -95.000000

11393.470001 12948.900024 -110.000000

11393.470001 12948.900024 -130.000000

11393.470001 12948.900024 -170.000000

11251.300003 12929.270020 -10.000000

11251.300003 12929.270020 -30.000000

11251.300003 12929.270020 -80.000000

11251.300003 12929.270020 -120.000000

11251.300003 12929.270020 -145.000000

NODE_DATA_DEF 0 “TOTHC” “log_mg/kg” IN_FILE

NODE_DATA_DEF 1 "" "" ID 668 IN_FILE

NODE_DATA_START

-3.000000 4.998203

-3.000000 4.998203

-3.000000 4.998203

-3.000000 4.998203

-3.000000 4.998203

-3.000000 4.998203

-3.000000 4.998203

-3.000000 4.998203

-3.000000 4.998203

-3.000000 4.998203

1.322219 4.998203

2.806180 4.998203

1.602060 4.998203

-3.000000 4.998203

-3.000000 4.998203

-3.000000 4.998203

-3.000000 4.998203

-3.000000 4.998203

-3.000000 4.998203

1.845098 4.998203

2.278754 4.998203

-3.000000 4.998203

1.296665 4.998203

-3.000000 4.998203

1.278754 4.998203

3.716003 4.998203

1.623249 4.998203

1.505150 4.998203

-3.000000 4.998203

1.707570 4.998203

-3.000000 4.998203

3.770852 4.998203

3.869232 4.998203

1.113943 4.998203

-3.000000 4.998203

2.025306 4.998203

3.434569 4.998203

3.594039 4.998203

2.454845 4.998203

-3.000000 4.998203

2.740363 4.998203

2.079181 4.998203

3.806180 4.998203

4.908485 4.998203

2.176091 4.998203

-3.000000 4.998203

3.792392 4.998203

3.362897 4.998203

4.255272 4.998203

3.699387 4.998203

3.518514 4.998203

3.301030 4.998203

3.113943 4.998203

-3.000000 4.998203

-3.000000 4.998203

-3.000000 4.998203

-3.000000 4.998203

1.361728 4.998203

-3.000000 4.998203

-3.000000 4.998203

2.000000 4.998203

1.643453 4.998203

1.732394 4.998203

1.643453 4.998203

3.556303 4.998203

-0.522879 4.998203

-3.000000 4.998203

-3.000000 4.998203

-3.000000 4.998203

-3.000000 4.998203

3.079181 4.998203

-3.000000 4.998203

2.633468 4.998203

1.505150 4.998203

-3.000000 4.998203

-3.000000 4.998203

-0.920819 4.998203

-3.000000 4.998203

-3.000000 4.998203

-3.000000 4.998203

-0.886057 4.998203

-3.000000 4.998203

-3.000000 4.998203

-3.000000 4.998203

-3.000000 4.998203

-3.000000 4.998203

-0.096910 4.998203

-3.000000 4.998203

4.000000 4.998203

2.000000 4.998203

1.602060 4.998203

1.000000 4.998203

-0.301030 4.998203

-3.000000 4.998203

1.785330 4.998203

-3.000000 4.998203

0.431364 4.998203

4.518514 4.998203

-3.000000 4.998203

CELL_SET_DEF 0 99 Point "" IN_FILE

CELL_START

0

1

2

3

4

5

6

7

8

9

10

11

12

13

14

15

16

17

18

19

20

21

22

23

24

25

26

27

28

29

30

31

32

33

34

35

36

37

38

39

40

41

42

43

44

45

46

47

48

49

50

51

52

53

54

55

56

57

58

59

60

61

62

63

64

65

66

67

68

69

70

71

72

73

74

75

76

77

78

79

80

81

82

83

84

85

86

87

88

89

90

91

92

93

94

95

96

97

98

END

import vtk

import vtk reads a dataset from any of the following 9 VTK file formats. Please note that VTK’s file formats include neither coordinate units information nor analyte units. There is a parameter which allows you to specify coordinate units (meters are the default).

  • vtk: legacy format
  • vtr: Rectilinear grids
  • vtp: Polygons (surfaces)
  • vts: Structured grids
  • vtu: Unstructured grids
  • pvtp: Partitioned Polygons (surfaces)
  • pvtr: Partitioned Rectilinear grids
  • pvts: Partitioned Structured grids
  • pvtu: Partitioned Unstructured grids

Module Output Ports

  • Output [Field] Outputs the saved field.
  • Output Object [Renderable]: Outputs to the viewer.

import cad

General Module Function

The import cad module will read the following versions of CAD files:

  • AutoCAD DWG and DXF files through AutoCAD 2021 (version 24.0)
  • Bentley Microstation DGN files through Version 8.

This module provides the user with the capability to integrate site plans, buildings, and other 2D or 3D features into the EVS visualization, providing a frame of reference for understanding the three-dimensional relationships between site features and geologic, hydrologic, and chemical characteristics. The drawing entities are treated as three-dimensional objects, which gives the user considerable flexibility in the placement of CAD objects relative to EVS objects in the visualization. The project onto surface and geologic_surfmap modules allow the user to drape CAD line-type entities (not 3D-Faces) onto three-dimensional surfaces.

Virtually all AutoCAD object types are supported including points, lines (of all types), 3D surface objects and 3D volumetric objects.

AutoCAD drawings can be drawn in model space (MSPACE) or paper space (PSPACE). Drawings in paper space have a defined viewport which has coordinates near the origin. When read into EVS this creates objects which are far from your true model coordinates. For this reason, all drawings for use in our software should be in model space.

Module Side Port

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration) from other modules. Use the Global Z Scale and avoid using this port in general.

Module Output Ports

  • Output [Field] Outputs the CAD layers.
  • Output Object [Renderable]: Outputs to the viewer

import vector gis

The import vector gis module reads the following vector file formats: ESRI Shapefile (*.shp); Arc/Info E00 (ASCII) Coverage (*.e00); Atlas BNA file (*.bna); GeoConcept text export (*.gxt); GMT ASCII Vectors (*.gmt); and the MapInfo TAB (*.tab) format.

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration) from other modules

Module Output Ports

  • Z Scale [Number] Outputs Z Scale (vertical exaggeration) to other modules
  • Output [Field] Outputs the GIS data.
  • Output Object [Renderable]: Outputs to the viewer

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Properties: controls Z Scale
  • Data Processing: controls clipping, processing (Log) and clamping of input data

import raster as horizon

The import raster as horizon module reads several different raster file formats into EVS Geology format, including DEMs, Surfer grid files, Mr. Sid files, and ADF files. Multiple import raster as horizon modules can be combined with combine horizons into a 3D geologic model. Alternatively, a single file can be displayed as a surface (with surfaces from horizons), or you can export its coordinates (with export nodes) to use the values in a GMF file.

Module Output Ports

  • Geologic legend Information [Geology legend] Supplies the geologic material information for the legend module.
  • Output Geologic Field [Field / minor] Outputs a 2D grid with data similar in functionality to gridding and horizons

buildings

The buildings module reads C Tech’s .BLDG file and creates various 3D objects (boxes, cylinders, wedge shapes for roofs, simple houses, etc.), and provides a means for scaling the objects and/or placing them at user-specified locations. The objects are displayed based on the x, y & z coordinates supplied by the user in a .bldg file, with additional scaling controls on the buildings user interface.

Each object is made up of 3D volumetric elements. This allows for the output of buildings to be cut or sliced to reveal a cross section through the buildings.

Selecting the “Edit Buildings” toggle will open an additional section which provides the ability to interactively create 3D buildings in your project.

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration) from other modules
  • View [View] Connects to the viewer to allow interactive building creation.

Module Output Ports

  • Z Scale [Number] Outputs Z Scale (vertical exaggeration) to other modules
  • Output [Field] Outputs the buildings as a field which can be sliced, cut or further subsetted.
  • Output Object [Renderable]: Outputs to the viewer

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Properties: controls Z Scale and file input and output
  • Default Building Settings: defines the default values when a building is interactively created
  • Building Settings: shows the parameters for all buildings.
  • Sample Buildings File

    Sample Buildings File Below is an example buildings file.

Subsections of buildings

Sample Buildings File

Below is an example buildings file. Note that the last 4 columns are optional and contain RGB color values (three numbers from zero to 1.0) and/or a building ID number that can be used for coloring. If only color values are supplied (3 numbers) the ID is automatically determined by the row number. If four numbers are provided it is assumed that the last one is the ID. If only one number is provided it is the ID.

The file below is shown in a table (with dividing lines) for clarity only. The first uncommented line is the number 16 which defines the number of rows of buildings data. The actual file is a simple ASCII file with separators of space, comma and/or tab.

# EVS
# Copyright (c) 1994-2008 by
# C Tech Development Corporation
# All Rights Reserved
# This software comprises unpublished confidential information of
# C Tech Development Corporation and may not be used, copied or made
# available to anyone, except in accordance with the license
# under which it is furnished.
# C Tech 3D Building file
# Building 0 is a unit box with base at z=0.0 centered at origin x,y
# Building 1 is a gabled roof for the unit box
#   (to make it a house) with base at z=0.0 centered at origin x,y
# Building 2 is a wedge roof for the unit box
#   (to make it a house) with base at z=0.0 centered at origin x,y
# Building 3 is an Equilateral (or Isosceles) Triangular Building, 3 sides
# Building 4 is a Right Triangular Building, 3 sides
# Building 5 is a Hexagonal (6 side) cylinder
# Building 6 is an Octagonal (8 side) cylinder
# Building 7 is a 16 side cylinder
# Building 8 is a 32 side cylinder
# Building 9 is a 16 sided horiz. cylindrical tank (Height & Width equal diameter, Length is along x)
# Building 10 is a 32 sided horiz. cylindrical tank (Height & Width equal diameter, Length is along x)
# Building 11 is a right angle triangle, height only at right angle
# Building 12 is a right angle triangle, height at non-right angle
# Building 13 is a right angle triangle, height at right angle and 1 non-right angle
# Lines beginning with “#” are comments
# First uncommented line is number of buildings

# X Y Z Length Width Height Angle Bldg_Type Color and/or ID

16

0 0 10 50 50 20 0 0 1
0 100 0 50 50 30 30 0 2
0 100 30 60 50 20 30 1 2
0 200 0 50 50 30 10 0 3
0 200 30 50 50 25 10 2 3
200 0 0 50 50 50 0 3 4
100 100 0 40 40 20 15 4 5
200 100 0 40 40 30 30 5 6
200 200 0 50 50 50 0 6 7
100 200 0 40 60 20 -45 7 8
100 0 0 50 50 40 0 8 9
300 0 0 60 20 20 -45 9 0.8 0.6 0.4 10
300 100 0 50 50 30 0 10 0.4 0.6 0.4 11
0 300 0 50 50 50 0 11 1.0 0.4 0.4 12
100 300 0 50 50 50 0 12 0.4 1.0 0.4 13
200 300 0 50 50 50 0 13 0.4 0.4 1.0 14
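Because the trailing columns are optional, buildings rows are variable-length and a parser has to branch on how many values follow the eight required columns. A sketch of that logic (field names are illustrative):

```python
def parse_building_row(fields):
    """Interpret one buildings data row. The first 8 columns are X, Y, Z,
    Length, Width, Height, Angle and Bldg_Type; the optional tail is RGB
    (3 numbers), RGB plus an ID (4 numbers), or just an ID (1 number)."""
    numbers = [float(v) for v in fields]
    keys = ("x", "y", "z", "length", "width", "height", "angle", "bldg_type")
    building = dict(zip(keys, numbers[:8]))
    tail = numbers[8:]
    if len(tail) >= 3:
        building["rgb"] = tuple(tail[:3])
    if len(tail) in (1, 4):
        building["id"] = int(tail[-1])
    return building
```

When only RGB values are present, no "id" key is set, matching the rule that the ID is then determined automatically by the row number.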

read_lines

The read_lines module is used to visualize a series of points with data connected by lines. read_lines accepts three file formats: with the APDV file format, the lines are connected by boring ID; with the ELF (EVS Line File) format, each line is defined by the points that make it up; and with the SAD (Strike and Dip) file format, there is a choice to connect each sample by ID or by data value.

SAD files connect by ID – If a *.sad file has been read the lines will be connected by ID.

SAD files connect by Data – If a *.sad file has been read the lines will be connected by the data component.

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration) from other modules

Module Output Ports

  • Z Scale [Number] Outputs Z Scale (vertical exaggeration) to other modules
  • Output Field [Field] Outputs the subsetted field as faces.
  • Output Object [Renderable]: Outputs to the viewer.

EVS Line File Example

Discussion of EVS Line Files

EVS line files contain horizontal and vertical coordinates, which describe the 3-D locations and values of properties of a system. Line files must be in ASCII format and can be delimited by commas, spaces, or tabs. They must have an .elf suffix to be selected in the file browsers of EVS modules. Each line of the EVS line file contains the coordinate data for one sampling location and up to 300 (columns of) property values. There are no computational restrictions on the number of lines that can be included in a file.

EVS Line Files

EVS Line Files consist of file tags that delineate the various sections of the file(s) and data (coordinates, nodal and/or cell data). The file tags are discussed below followed by portions of a few example files.

FILE TAGS:

The file tags for the ASCII file formats (shown in Bold Italics) are discussed below with a representative example. They are given in the appropriate order. If you need assistance creating software to write these file formats, please contact support@ctech.com.

COORD_UNITS “ft” Defines the coordinate units for the file. These should be consistent in X, Y, and Z.

NUM_DATA 7 1

Number of nodal data components followed by the number of cell data components.

NODE_DATA_DEF 0 “TOTHC” “log_ppm”

NODE_DATA_DEF specifies the definition of a nodal data component. The second value is the data component number, the third is the name, and the 4th is the units.

CELL_DATA_DEF 0 “Indicator” “Discreet Unit”

Definition of cell data. Same options as NODE_DATA_DEF

LINE 12 1

The beginning of a line segment; the tag is followed on the same line by the cell data values.

Following this line should be the points making up the line in the following format:

X, Y, Z coordinates followed by nodal data values.

64718.310547 37500.000000 -1250.000000 1 -1250.000000

63447.014587 35101.682129 -2000.000000 2 -2000.000000

CLOSED

This flag is used at the end of a line definition to indicate the end of the line should be connected to the beginning of the line.

END

Marks the end of the data section of the file. (Allows us to put a password on .eff files)
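As a sketch of how these tags fit together, the following Python writer (a hypothetical helper, not a C Tech utility; the coordinates and data values are made up) emits a minimal .elf file in the tag order described above:

```python
import os
import tempfile

def write_elf(path, node_defs, cell_defs, lines, coord_units="ft"):
    """Write a minimal EVS line file using the tags described above.

    node_defs / cell_defs: lists of (name, units) tuples.
    lines: list of (cell_values, points, closed) where each point is
    (x, y, z, *nodal_values)."""
    with open(path, "w") as f:
        f.write('COORD_UNITS "%s"\n' % coord_units)
        f.write("NUM_DATA %d %d\n" % (len(node_defs), len(cell_defs)))
        for i, (name, units) in enumerate(node_defs):
            f.write('NODE_DATA_DEF %d "%s" "%s"\n' % (i, name, units))
        for i, (name, units) in enumerate(cell_defs):
            f.write('CELL_DATA_DEF %d "%s" "%s"\n' % (i, name, units))
        for cell_values, points, closed in lines:
            # LINE tag, followed on the same line by the cell data values
            f.write(" ".join(["LINE"] + [str(v) for v in cell_values]) + "\n")
            for point in points:
                f.write(" ".join("%f" % v for v in point) + "\n")
            if closed:
                f.write("CLOSED\n")
        f.write("END\n")

# Made-up data: one open line, one nodal component, one cell component.
path = os.path.join(tempfile.gettempdir(), "demo.elf")
write_elf(path,
          node_defs=[("TOTHC", "log_ppm")],
          cell_defs=[("Indicator", "Discrete Unit")],
          lines=[((1,), [(0.0, 0.0, -10.0, 2.5),
                         (100.0, 0.0, -20.0, 3.1)], False)])
```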

EXAMPLE FILE

NUM_DATA 2 0

NODE_DATA_DEF 0 "Node_Number" "Linear_ID"

NODE_DATA_DEF 1 "Distance" "Linear_ft"

LINE

1900297.026154 677367.319824 72.000000 0.000000 0.000000

1900314.256775 677438.611328 72.000000 1.000000 73.344208

1900314.687561 677442.703522 72.000000 2.000000 77.459015

1900316.410645 677447.011261 72.000000 3.000000 82.098587

1900319.641266 677447.442018 72.000000 4.000000 85.357796

1900345.487030 677441.411530 72.000000 5.000000 111.897774

1900360.563782 677439.472870 72.000000 6.000000 127.098656

1900363.579193 677447.226807 72.000000 7.000000 135.418289

1900365.517822 677447.226807 72.000000 8.000000 137.356918

1900365.948608 677438.396118 72.000000 9.000000 146.198105

1900379.733032 677436.888245 72.000000 10.000000 160.064758

1900405.578766 677432.150055 72.000000 11.000000 186.341217

1900497.331879 677416.427002 72.000000 12.000000 279.431763

1900511.331512 677414.919464 72.000000 13.000000 293.512329

1900525.762268 677411.257721 72.000000 14.000000 308.400421

1900527.269775 677405.442444 72.000000 15.000000 314.407898

1900524.900696 677399.411926 72.000000 16.000000 320.887085

1900522.531311 677391.012024 72.000000 17.000000 329.614746

1900517.362366 677357.196808 72.000000 18.000000 363.822754

1900501.854828 677266.951569 72.000000 19.000000 455.390686

1900501.639282 677262.213379 72.000000 20.000000 460.133789

1900500.777710 677255.321014 72.000000 21.000000 467.079773

1900496.470306 677250.151733 72.000000 22.000000 473.808472

1900487.208862 677241.751816 72.000000 23.000000 486.311798

1900450.378204 677201.906097 72.000000 24.000000 540.572083

1900403.568481 677152.368134 72.000000 25.000000 608.727478

1900356.758759 677102.830177 72.000000 26.000000 676.882874

1900309.949036 677053.292221 72.000000 27.000000 745.038269

1900286.257172 677028.523243 72.000000 28.000000 779.313721

1900278.718445 677022.923517 72.000000 29.000000 788.704651

1900269.672546 677024.431061 72.000000 30.000000 797.875305

1900217.334717 677035.200397 72.000000 31.000000 851.309631

1900232.196075 677097.230453 72.000000 32.000000 915.095154

1900247.057434 677159.260513 72.000000 33.000000 978.880615

1900252.226715 677179.937317 72.000000 34.000000 1000.193787

1900267.159851 677242.326401 72.000000 35.000000 1064.345215

1900282.093018 677304.715485 72.000000 36.000000 1128.496460

1900297.026154 677367.104584 72.000000 37.000000 1192.647827

END

read strike and dip

General Module Function

The read strike and dip module is used to visualize sampled locations. It places a disk, oriented by strike and dip, at each sample location. Each disk can be probed and can be colored by a picked color, by ID, or by data value. If an ID is present, such as a boring ID, there is an option to place tubes between disks that share the same ID.

Strike and dip refer to the orientation of a geologic feature. The strike is a line representing the intersection of that feature with a horizontal plane (often taken as the ground surface). Strike is represented with a line segment parallel to the strike line. Strike can be given as a compass direction (a single three-digit number representing the azimuth) or a basic compass heading (e.g. N, E, NW).

The dip gives the angle of descent of a feature relative to a horizontal plane, and is given as a number (0°-90°) as well as a letter (N, S, E, W, NE, SW, etc.) corresponding to the rough direction in which the feature (bed) is dipping.

Info

We do not support the Right-Hand Rule, therefore all dip directions must have the direction letter(s).

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration).

Module Output Ports

  • Z Scale [Number] Outputs Z Scale (vertical exaggeration) to other modules
  • Output [Field] Outputs the subsetted field as edges
  • Output Object [Renderable]: Outputs to the viewer

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Properties: controls the Z scaling and edge angle used to determine what edges should be displayed
  • Display Settings: controls the type and specific data to be output or displayed

Strike and Dip File Example

Discussion of Strike and Dip Files

Strike and dip files consist of 3D coordinates along with two orientation values called strike and dip. A simple disk is placed at the coordinate location and then the disk is rotated about Z to match the strike and then rotated about Y to match the dip. An optional id and data value can be used to color the disk.
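The module's exact rotation convention is internal, but under the standard geologic convention the orientation of each disk can be summarized by its plane normal. A minimal sketch (a hypothetical helper assuming an east-north-up frame, with the dip direction azimuth measured clockwise from north):

```python
import math

def plane_normal(dip_azimuth_deg, dip_deg):
    """Unit normal (east, north, up) of a dipping plane.

    dip_azimuth_deg: compass azimuth of the dip direction, clockwise
    from north; dip_deg: dip angle, 0 (horizontal) to 90 (vertical)."""
    a = math.radians(dip_azimuth_deg)
    d = math.radians(dip_deg)
    return (math.sin(a) * math.sin(d),   # east component
            math.cos(a) * math.sin(d),   # north component
            math.cos(d))                 # up component

flat = plane_normal(0, 0)       # horizontal disk: normal points straight up
steep = plane_normal(90, 90)    # vertical plane dipping due east
```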

Format:

You may insert comment lines in C Tech Strike and Dip (.sad) input files. Comments can be inserted anywhere in a file and must begin with a ‘#’ character.

Strike can be defined in the following ways:

  1. For strikes running along an axis:

N, S, NS, SN are all equivalent to 0 or 180, and will always have a dip to E or W

E, W, EW, WE are all equivalent to 90 or 270, and will always have a dip to N or S

NE, SW are both equivalent to 45 or 225, and can have a dip specified to N, S, E, or W

NW, SE are both equivalent to 135 or 315, and can have a dip specified to N, S, E, or W

  2. For all other strikes: any compass direction between 0 and 360 degrees can be specified, with the dip direction clarifying which side of the strike is downhill.

Dip can be defined only in degrees in the range of 0 to 90.0 followed by a direction such as 35.45E

There is no required header for this file type.

Each line of the file must contain:

X, Y, Z, Strike, Dip, ID (optional), and Data (optional).

NOTE: The ID can contain spaces only if it is enclosed in quotation marks (e.g. "ID 1").

EXAMPLE FILE

x y z strike dip

51.967 10.948 26.127 35.205 59.8031E

50.373 33.938 26.127 13.048 68.49984E

51.654 60.213 26.127 139.18 76.74215E

50.529 83.203 26.127 213.50 62.94599E

64.358 76.634 11.471 114.23 80.38694E

66.430 33.938 -6.849 41.421 60.38837E

75.901 50.360 -21.505 60.141 72.88960E

72.943 7.663 -21.505 5.255 65.51247E

101.90 30.654 -72.801 77.675 65.9524E

81.339 50.360 -43.489 244.95 70.7079E

72.263 73.350 -21.505 82.929 69.3159E

89.897 73.350 -61.809 31.531 55.6570E

END
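A minimal reader for lines like those above might look as follows (a hypothetical sketch, not C Tech code; it uses shlex so quoted IDs such as "ID 1" stay whole, and it skips comments, the optional header, and the END tag):

```python
import re
import shlex

_DIP = re.compile(r"^([0-9.]+)\s*([NSEW]{0,2})$")  # e.g. "59.8031E", "35NE"

def parse_sad_line(line):
    """Parse one .sad data line into a dict, or return None for
    comments, the END tag, the optional header, or malformed lines."""
    line = line.strip()
    if not line or line.startswith("#") or line.upper() == "END":
        return None
    tokens = shlex.split(line)       # quoted IDs like "ID 1" stay whole
    if len(tokens) < 5:
        return None
    m = _DIP.match(tokens[4])
    if m is None:
        return None                  # e.g. the header row "x y z strike dip"
    try:
        rec = {"x": float(tokens[0]), "y": float(tokens[1]),
               "z": float(tokens[2]),
               "strike": tokens[3],  # numeric azimuth or letters such as NE
               "dip": float(m.group(1)), "dip_dir": m.group(2)}
    except ValueError:
        return None
    if len(tokens) > 5:
        rec["id"] = tokens[5]
    if len(tokens) > 6:
        rec["data"] = float(tokens[6])
    return rec
```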

FILE TAGS:

The file tags for the ASCII file formats (shown in Bold Italics) are discussed below with a representative example. They are given in the appropriate order. If you need assistance creating software to write these file formats, please contact support@ctech.com.

COORD_UNITS "ft" Defines the coordinate units for the file. These should be consistent in X, Y, and Z.

END (this is optional, but should be used if any lines will follow your actual data lines)

read glyph

read glyph replaces the Glyphs sub-library that was in the tools library. It reads glyphs saved in any of the three primary EVS field file formats and allows you to modify the shape and orientation of the glyph so that it can be used in various modules that employ glyphs in slightly different ways. These include glyphs at nodes, place_glyph, drive_glyphs, advector, post_samples, etc. Most modules EXCEPT post_samples will use the glyphs without changing the default alignment. The supported file formats are:

  1. .eff ASCII format, best if you want to be able to open the file in an editor or print it

  2. .efz GNU Zip compressed ASCII, same as .eff but in a zip archive

  3. .efb binary compressed format, the smallest & fastest format due to its binary form

For a description of the .EFF file formats click here.

The objects saved in the .efx files should be simple geometric objects ideally designed to fit in a unit box centered at the origin (0,0,0). For optimal performance the objects should not include nodal or cell data. You may create your own objects or use any of the ones that C Tech supplies in the ctech\data\glyphs folder.

Module Output Ports

  • Output [Field] Outputs the saved glyph.
  • Output Object [Renderable]: Outputs to the viewer.

General Module Function

The import geometry module will read STL, PLY, OBJ and .G files containing object geometries.

This module provides the user with the capability to integrate site plans, topography, buildings, and other 3D features into the EVS visualizations.

Info

This module intentionally does not have a Z-Scale port since this class of files is so often not in a user’s model projected coordinate system. Instead we are providing a Transform Settings group that allows for a much more complex set of transformations including scaling, translations and rotations.

Module Output Ports

  • Output [Field] Outputs the CAD layers.
  • Output Object [Renderable]: Outputs to the viewer

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

Transform Settings: This allows you to add any number of Translation or Scale transformations in order to place your Wavefront Object in the same coordinate space as the rest of your “Real-World” model. It is very typical that Wavefront Objects are in a rather arbitrary local coordinate system that will have no defined transformation to any standard coordinate projection.

Generally you should know whether the coordinates are feet or meters, and if they are not correct, do that scaling as your first set of transforms.

It will be up to you to determine the set of translations that will properly place this object in your model. Hopefully rotations will not be required, but they are possible with the Transform List.
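As an illustration of composing such a transform list, the sketch below (with made-up site values; the module's actual parameter names may differ) applies a meters-to-feet unit scale first and then a translation, expressed as 4x4 homogeneous matrices:

```python
import numpy as np

def scale(s):
    """Uniform scale as a 4x4 homogeneous matrix."""
    m = np.eye(4)
    m[0, 0] = m[1, 1] = m[2, 2] = s
    return m

def translate(tx, ty, tz):
    """Translation as a 4x4 homogeneous matrix."""
    m = np.eye(4)
    m[:3, 3] = (tx, ty, tz)
    return m

# A model authored in meters placed into a feet-based project: unit
# scale first, then a translation into (made-up) site coordinates.
M = translate(1900000.0, 677000.0, 72.0) @ scale(3.28084)

p_local = np.array([10.0, 5.0, 2.0, 1.0])   # a vertex in local coordinates
p_world = M @ p_local
```

Note the order: because the scale is applied first, the translation values are already in project units and are not rescaled.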

  • write evs field

    write evs field The write evs field module creates a file in one of several formats containing the mesh and nodal and/or cell data component information sent to the input port. This module is useful for writing the output of modules which manipulate or interpolate data (3d estimation , 2d estimation, etc.) so that the data will not need to be processed in the future.

  • export web scene

    export web scene connects via the view port and writes all objects in your view as a 'C Tech Web Scene' (*.ctws), a single file which you and your customers can load and view at https://viewer.ctech.com/

  • export pdf scene

    export pdf scene connects via the view port and writes all objects in your view as a .evspdf file that C Tech's PDF Converter can convert to a 3D PDF.

  • export 3d scene

    This module will export the entire view (model) in the following formats to allow importing to other 3D modeling software:

  • export nodes

    export nodes export nodes provides a means to export an ASCII file containing the coordinates (and optionally the data) of any object in EVS. The output contains a header line and one row for each node in the input field. Each row contains the x, y, & z coordinates and optionally node number and nodal data.

  • export cad

    export cad General Module Function export cad will output one or more individual objects (red port) or your complete model (purple input port from the viewer). Volumetric objects in EVS are converted to surface and line type objects. export cad preserves the colors of all cells and objects by assigning cell colors to each AutoCAD surface or line entity according to the following procedure:

  • export surface to raster

    export surface to raster The export surface to raster module will create a raster file in the GeoTiff format. It takes any input field, and writes a raster (in plan view) of the data provided from that field. Regions outside of the input area are masked with an appropriate NoData flag. A single data component (node or cell) can be exported to the GeoTiff file.

  • export vector gis

    export vector gis The export vector gis module will create a file in one of the following vector formats: ESRI Shapefile (.shp); GMT ASCII Vectors (.gmt); and MapInfo TAB (*.tab). Although C Tech allows non-ASCII analyte names, ESRI does not. Please see this link on acceptable shapefile field (attribute) names. It basically says that only A-Z, a-z, 0-9 and “_” are allowed. The only thing we can do when writing a shapefile is to change any unacceptable (non-ASCII) character to “_” and add a number if there are more than one.

  • export horizon to raster

    export horizon to raster export horizon to raster is used in conjunction with gridding and horizons with rectilinear grids of geologic data. A large number of formats are supported such as Surfer and ESRI grids. For some formats, each cell in your grid should be the same size. This will require you to adjust the extents of your grid and set the grid resolution according to:

  • write lines

    write_lines The write_lines module is used to save a series of points with data connected by lines. These lines are stored in the EVS Line File format. Module Input Ports Input Field [Field] Accepts a field with or without data which represents lines

  • export horizons to vistas

    export horizons to vistas export horizons to vistas is used in conjunction with gridding and horizons. gridding and horizons can create finite difference grids based on your geologic data. It writes the fundamental geologic grid information to a file format that Ground Water Vistas can read. The output includes the x,y origin; rotation; and x-y resolutions in addition to descriptive header lines proceeded by a “#”.

Subsections of Export

write evs field

The write evs field module creates a file in one of several formats containing the mesh and nodal and/or cell data component information sent to the input port.

This module is useful for writing the output of modules which manipulate or interpolate data (3d estimation , 2d estimation, etc.) so that the data will not need to be processed in the future.

The saved and processed data can be read using read evs field, which is much faster than reprocessing the data.

Principal recommended format: EF2

  • The newest and strongly recommended format is EF2. This format is capable of containing additional field data and mesh types which are not supported in our Legacy format. Please note that this is the only LOSSLESS format for current and future EVS fields. Although the files created in EF2 format are generally larger than EFBs, the further subsetting and/or processing of these updated fields can be dramatically more efficient.

    • Uniform fields
    • Geology (from gridding and horizons)
    • Structured fields (such as irregular fields read in from Read_Field)
    • Unstructured Cell Data (UCD format) general grids with nodal and/or cell data
    • Special fields containing spheres (which are points with radii)
    • Special fields containing color data (such as LIDAR data)

Legacy formats:

  • The legacy formats below were the recommended formats in software releases before 2024. With our enhancements to EVS Fields, these formats must be considered LOSSY, meaning that some data and the (EF2) optimized grids will be compromised if these formats are used. We strongly recommend using the EF2 format.
    • .eff ASCII format, best if you want to be able to open the file in an editor or print it. For a description of the .EFF file formats click here.
    • .efz GNU Zip compressed ASCII, same as .eff but in a zip archive
    • .efb binary compressed format, the smallest & fastest format due to its binary form

Module Input Ports

  • Geologic legend Information [Geology legend] Accepts the geologic material information for the legend module.
  • Input Field [Field] Accepts the field to be saved.
  • File Notes [String / minor] Accepts a string to document the settings used to create the field.

Module Parameters

There are only a few parameters in write evs field, but they provide important functionality and should be understood.

  • Check for Cell Set Data (EF2 Only): Causes any cell data that is constant across a cell set to be saved as cell set data. This is more efficient and is recommended.
  • Translate by (Application) Origin: Normally on; turn this off if the contents represent content which is not in your application origin. Examples are glyphs or inputs to modules such as cross section tubes
  • LEGACY FILE OPTIONS
    • Split Into Separate Files: This toggle applies only to EFF format files and makes it easier to create your own EFF files from similar data. It separates the header file (.eff) from the coordinates, data and connectivity.
    • Force Nodal in Output: This toggle is on by default and ensures that fields without data are tagged as having data because many EVS modules may not allow connections for fields without data. It does not add data, it only tags the file as having data (even if it doesn’t)
    • Force Cell in Output: Similar to the toggle above, but needed far less often.

export web scene connects via the view port and writes all objects in your view as a “C Tech Web Scene” (*.ctws), a single file which you and your customers can load and view at: https://viewer.ctech.com/

Details on its use are in How to Create C Tech Web Scenes.htm

WARNINGS:

  • DATAMAPS ARE USED FOR PROBING: When using unlinked values (Min and Max) such that the resulting datamap is a subset of the true data range, probing in C Tech Web Scenes will only be able to report values within the truncated data range. Values outside that limited range will display the nearest value within the truncated range. This applies to the use of the Datamap parameters in post samples or when the data range is truncated by clipping in the estimation modules or with the change min max module.

export pdf scene connects via the view port and writes all objects in your view as a .evspdf file that C Tech’s PDF Converter can convert to a 3D PDF. This module requires a valid PDF Converter license in order to function.

This module will export the entire view (model) in the following formats to allow importing to other 3D modeling software:

  • glTF 2.0 (.glb binary format)
  • FBX (.fbx)
  • COLLADA (.dae)

All files are written in a coordinate system where the X-Y origin (0,0) is the Application Origin. This is done to preserve precision in these formats, which are fundamentally single precision.
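The precision argument can be checked directly: at state-plane magnitudes a 32-bit float cannot resolve better than an eighth of a foot, while origin-shifted coordinates retain far finer spacing. A small numpy demonstration (the origin value is an assumption for illustration):

```python
import numpy as np

x_world = 1900297.026154      # a state-plane easting like those in the examples
spacing_world = float(np.spacing(np.float32(x_world)))
# float32 can only step in 0.125 ft increments at this magnitude

origin = 1900000.0            # assumed Application Origin for illustration
spacing_local = float(np.spacing(np.float32(x_world - origin)))
# origin-shifted coordinates step in ~3e-5 ft increments
```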

export nodes

export nodes provides a means to export an ASCII file containing the coordinates (and optionally the data) of any object in EVS. The output contains a header line and one row for each node in the input field. Each row contains the x, y, & z coordinates and optionally node number and nodal data.

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration) from other modules
  • Input Field [Field] Accepts a field with or without data

export cad

General Module Function

export cad will output one or more individual objects (red port) or your complete model (purple input port from the viewer). Volumetric objects in EVS are converted to surface and line type objects.

export cad preserves the colors of all cells and objects by assigning cell colors to each AutoCAD surface or line entity according to the following procedure:

a) If nodal data is present, the first nodal data component is averaged to the cells and that color is applied. This is equivalent to the appearance of surfaces in EVS with flat shading mode applied.

b) If no nodal data is present, but cell data is, that color is applied. This is equivalent to the appearance of surfaces in EVS with flat shading mode applied.

c) If neither nodal nor cell data is present, the object’s color is used.
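The selection logic in (a)-(c) can be sketched as follows (a hypothetical helper, not the module's code; `cell_nodes` lists each cell's node indices):

```python
import numpy as np

def cell_color_values(cell_nodes, nodal=None, cell=None, default=None):
    """Pick the per-cell value used for coloring, following (a)-(c):
    average nodal data to cells, else use cell data, else object color."""
    if nodal is not None:                                    # rule (a)
        return np.array([np.mean(np.asarray(nodal)[idx]) for idx in cell_nodes])
    if cell is not None:                                     # rule (b)
        return np.asarray(cell, dtype=float)
    return default                                           # rule (c)

# Two cells of two nodes each; the flat-shaded value is the nodal average.
vals = cell_color_values([[0, 1], [2, 3]], nodal=[0.0, 2.0, 4.0, 6.0])
```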

The results should look fairly similar to the viewer in EVS except:

  • AutoCAD has a very limited color palette with only 256 total colors. With some datamaps this limitation will be more problematic, and it is possible that the nearest AutoCAD color may apply to multiple colors used in a subtle geology datamap.
  • AutoCAD lacks Gouraud shading support (as mentioned above), so all cells are flat shaded.

All “objects” in EVS are converted to separate layers based upon the EVS object name (as shown in the viewer’s Object_Selector).

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration) from other modules
  • View [View] Connects to the viewer to receive all objects in the view
  • Input Object [Renderable]: Receives inputs from one or more module’s red port

export surface to raster

The export surface to raster module will create a raster file in the GeoTiff format.

It takes any input field, and writes a raster (in plan view) of the data provided from that field. Regions outside of the input area are masked with an appropriate NoData flag. A single data component (node or cell) can be exported to the GeoTiff file.

Raster resolution can be controlled via the Grid Cell Size parameter, which will default (when linked) to a size which generates a raster of up to four million pixels, with fewer generated depending on how much the input shape deviates from having square extents.
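One plausible reading of that default (an assumption, not the documented algorithm) is a square cell size chosen so the bounding extents hold about four million pixels:

```python
import math

def default_cell_size(xmin, xmax, ymin, ymax, max_pixels=4_000_000):
    """Square cell size putting roughly max_pixels over the extents."""
    area = (xmax - xmin) * (ymax - ymin)
    return math.sqrt(area / max_pixels)

cell = default_cell_size(0.0, 2000.0, 0.0, 2000.0)  # a 2000 x 2000 raster
```

An input shape that fills only part of those extents yields fewer valid pixels, since the rest is masked with the NoData flag.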

When exporting certain cell data, such as Lithology, connecting the Geologic Legend Information port will allow the raster to include additional metadata in a raster dataset attribute table file. This additional file will allow programs such as ESRI’s ArcGIS Pro to automatically load the GeoTiff with proper names associated with each material.

Module Input Ports

  • Input Field [Field] Accepts a field with data to export
  • Geologic Legend Information Accepts the geologic information from an appropriate module, such as lithologic modeling, to associate data with names

export vector gis

The export vector gis module will create a file in one of the following vector formats: ESRI Shapefile (*.shp); GMT ASCII Vectors (*.gmt); and MapInfo TAB (*.tab).

Although C Tech allows non-ASCII analyte names, ESRI does not. Please see this link on acceptable shapefile field (attribute) names. It basically says that only A-Z, a-z, 0-9 and “_” are allowed. The only thing we can do when writing a shapefile is to change any unacceptable (non-ASCII) character to “_” and add a number if there are more than one.

If you plan to create a shapefile it is better to change the analyte names to an ASCII equivalent that is more meaningful, but uses only the acceptable character set.
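The renaming rule described above can be sketched in a few lines (a hypothetical helper; the module's actual numbering scheme may differ):

```python
import re

def sanitize_field_names(names):
    """Replace characters outside A-Z, a-z, 0-9, '_' with '_' and append
    a number to later duplicates."""
    seen, out = {}, []
    for name in names:
        clean = re.sub(r"[^A-Za-z0-9_]", "_", name)
        if clean in seen:
            seen[clean] += 1
            clean = "%s%d" % (clean, seen[clean])
        else:
            seen[clean] = 0
        out.append(clean)
    return out

# Two distinct analyte names that collapse to the same cleaned name:
fields = sanitize_field_names(["TPH (mg/kg)", "TPH [mg/kg]"])
```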

Info

Make sure to connect export vector gis after explode_and_scale to ensure that z-scaling is properly compensated.

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration) from other modules
  • Input Field [Field] Accepts a field with or without data

export horizon to raster

export horizon to raster is used in conjunction with gridding and horizons with rectilinear grids of geologic data. A large number of formats are supported such as Surfer and ESRI grids. For some formats, each cell in your grid should be the same size. This will require you to adjust the extents of your grid and set the grid resolution according to:

Cell size = (Max:xy - Min:xy) / (grid-resolution -1)
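Rearranged for the grid resolution, with a check that the extents divide evenly into equal-size cells (a hypothetical helper):

```python
def grid_resolution(min_xy, max_xy, cell_size):
    """Invert the relation above: the resolution that yields equal-size
    cells, with a check that the extents divide evenly."""
    n = (max_xy - min_xy) / cell_size + 1
    if abs(n - round(n)) > 1e-9:
        raise ValueError("extents are not an integer multiple of cell size")
    return int(round(n))

res = grid_resolution(0.0, 100.0, 10.0)   # 11 grid lines -> 10 cells of 10 ft
```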

NOTE: YOU MUST SELECT RECTILINEAR GRIDDING IN gridding and horizons

Module Input Ports

  • Geology Export Output [Vistas Data] Accepts output from gridding and horizons for conversion to raster grids.

write_lines

The write_lines module is used to save a series of points with data connected by lines. These lines are stored in the EVS Line File format.

Module Input Ports

  • Input Field [Field] Accepts a field with or without data which represents lines

export horizons to vistas

export horizons to vistas is used in conjunction with gridding and horizons. gridding and horizons can create finite difference grids based on your geologic data.

It writes the fundamental geologic grid information to a file format that Ground Water Vistas can read.

The output includes the x,y origin; rotation; and x-y resolutions in addition to descriptive header lines preceded by a “#”.

Module Input Ports

  • Geology Export Output [Vistas Data] Accepts output from gridding and horizons for conversion to Groundwater Vistas format
  • driven sequence

    The driven sequence module controls the semi-automatic creation of sequences for the following modules:

  • scripted sequence

    The scripted sequence module provides the most power and flexibility, but requires creating a Python script which sets the states of all modules to be

  • object sequence

    This is the simplest of the sequence modules, but also the easiest to abuse (vs. using scripted sequence where you can be more efficient).

Subsections of Sequences

The driven sequence module controls the semi-automatic creation of sequences for the following modules:

  • slice
  • cut
  • plume
  • plume shell

Control over these modules is via the purple “Sequence Output” ports on the driven modules and the “Sequence Input” port on driven sequence.

All modules to be grouped in the Sequence must have their red output ports connected to driven sequence instead of the viewer. Consider driven sequence to act like a group objects module.

Module Input Ports

  • Sequence Input Receives special port from only plume, plume shell, and slice modules
  • Input Objects [Field] Accepts any number of red object ports from modules changed within the sequence.

Module Output Ports

  • Current State Title
  • Output Object [Renderable]: Outputs to the viewer.

Other modules not listed above may be included if one of the “driven modules” controls those modules. Examples are titles, isolines, band data, etc.

driven sequence has only a Current State slider which allows you to test the sequence or directly access any state. The latter is useful when using this module in EVS or EVS Presentations. Please note that in this case, selecting a state requires that all controlled modules must run. This is much slower than selecting a (saved) state of a .CTWS file.

The State Name output port provides a simple way to include a title which specifies the current displayed state in sequences.

The Driven Modules have the bulk of the settings which determine what controls and states will be available such as:

  • Use Sequencing: This toggle must be on when using driven sequence with a driven module.

  • State Control: Choose between

    • Slider
    • Combo Box
    • List Box
  • Sequence Type: Choose between

    • By Count (set the number of states between minimum and maximum values)
    • By Step Size (set the increment between each step)
  • State Titles are automatically generated for you

Please note: For Log Data, the state values are determined using the same logic (algorithm) which we have traditionally applied to modules such as isolines, band data and legend. This means that for:

  • By Count: If the number of Frames results in:

    • 2 steps per decade, then the states will be .1, .3, 1, 3, 10, 30, etc.
    • 3 steps per decade, then the states will be .1, .2, .5, 1, 2, 5, 10, 20, 50, etc.
  • By Step Size: If the increment is:

    • 0.5 , then the states will be .1, .3, 1, 3, 10, 30, etc.
    • 0.33333, then the states will be .1, .2, .5, 1, 2, 5, 10, 20, 50, etc.
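Both spacings follow from one rule: a fixed mantissa cycle, (1, 3) or (1, 2, 5), repeated across decades. A sketch of that rule (a hypothetical helper, not the module's code):

```python
import math

def log_states(vmin, vmax, mantissas):
    """Log-spaced state values: the mantissa cycle repeated per decade,
    clipped to [vmin, vmax]."""
    out = []
    e = math.floor(math.log10(vmin))
    while 10 ** e <= vmax:
        for m in mantissas:
            v = m * 10 ** e
            if vmin <= v <= vmax:
                out.append(v)
        e += 1
    return out

two_per_decade = log_states(0.1, 30, (1, 3))       # .1, .3, 1, 3, 10, 30
three_per_decade = log_states(0.1, 50, (1, 2, 5))  # .1, .2, .5, ..., 20, 50
```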

The scripted sequence module provides the most power and flexibility, but requires creating a Python script which sets the states of all modules to be in the sequence.

Module Input Ports

  • Input Value 1 [Number] Accepts a number
  • Input Value 2 [Number] Accepts a number
  • Input Value 3 [Number] Accepts a number
  • Input Value 4 [Number] Accepts a number
  • Input Objects [Field] Accepts any number of red object ports from modules changed within the sequence.

Module Output Ports

  • Output Value 1 [Number] Passes the number
  • Output Value 2 [Number] Passes the number
  • Output Value 3 [Number] Passes the number
  • Output Value 4 [Number] Passes the number
  • Current State Title
  • Output Object [Renderable]: Outputs to the viewer.

The process for using this module is:

  1. Determine which modules’ output will be affected (controlled) by the Python script and therefore contained in one or more states.
  2. Connect the red output ports of those modules to scripted sequence instead of the viewer
  3. Set the number of states and their names. This can be done manually or in a secondary (separate) Python Script.
  4. Choose and set the State Control type: Choose between
    • Slider
    • Combo Box
    • List Box
  5. Create and test the Python script which will control all modules, which must be set under Filename.

This is the simplest of the sequence modules, but also the easiest to abuse (vs. using scripted sequence where you can be more efficient).

You create “states” merely by connecting modules (including groups) to the object sequence’s input port. This module works much like a group object module, in that you can rearrange the order of the modules within, each of which creates a “state” named with that module’s (or group’s) name.

Module Input Ports

  • Input Objects [Field] Accepts any number of red object ports from modules changed within the sequence.

Module Output Ports

  • Current State Title
  • Output Object [Renderable]: Outputs to the viewer.
  • 3d streamlines

    3d streamlines The 3d streamlines module is used to produce streamlines or stream-ribbons of a field which is a 2 or 3 element vector data component on any type of mesh. Streamlines, which are simply 3D polylines, represent the pathways particles would travel based on the gradient of the vector field. At least one of the nodal data components input to streamlines must be a vector. The direction of travel of streamlines can be specified to be forwards (toward high vector magnitudes) or backwards (toward low vector magnitudes) with respect to the vector field. Streamlines are produced by integrating a velocity field using the Runge-Kutta method of specified order with adaptive time steps.

  • surface streamlines

    surface streamlines The surface streamlines module is used to produce streamlines on any surface based on its slopes. Streamlines are 3D polylines representing the paths particles would travel based on the slopes of the input surface. The direction of travel of streamlines can be specified to be downhill or uphill for the slope case. A physics simulation option is also available which employs a full physics simulation including friction and gravity terms to compute streamlines on the surface.

  • create drill path

    The create drill path module allows you to interactively create a complex drill path with multiple segments.

  • modpath

    modpath The modpath module uses the cell by cell flow values generated from a MODFLOW project along with head values and other MODFLOW parameters to trace the path of a particle of water as it moves through the ground. The paths are calculated using the same algorithms used by U.S. Geological Survey MODPATH and the results should be similar.

  • scalars to vector

    scalars to vector The scalars to vector module is used to create an n-length vector by combining n selected scalar data components. The vector length is determined by the Vector Type selector (2D or 3D). Once the required number of components has been selected, any other data components are grayed out and not selectable. To change selections, first deselect one of the vector components and then select a new component. If no components are selected, then all components are selectable. The order in which the components are selected will determine in which order they occur in the vector.

  • vector to scalars

    The vector to scalars module converts all vector nodal data components into individual scalars. For example, a vector data component named 'velocity'

  • vector magnitude

    vector magnitude The vector magnitude module calculates the vector magnitude of a vector field data component at every node in a mesh. Input to vector magnitude must contain a mesh of any type and nodal data. Nodal data components can be scalar or vector with up to 3 vector subcomponents. Module Input Ports Input Field [Field] Accepts a vector data field Module Output Ports

  • gradient

    gradient The gradient module calculates the vector gradient field of a scalar data component at every node in a mesh. Input to gradient must contain a mesh of any type and nodal data, with at least one scalar nodal data component. Gradient uses a finite-difference method based on central differencing to calculate the gradient on structured (rectilinear) meshes. Shape functions and their derivatives are used to calculate the gradient on unstructured meshes.

  • capture zone

    The capture_zone module utilizes 3d streamlines technology to determine the volumetric regions of your model for which groundwater flow will be captured by one or more extraction wells.

  • seepage velocity

    The seepage_velocity module computes the vector groundwater flow (seepage velocity) field for visualization.

  • regional averages

    The regional_averages module averages nodal data values from the input field that fall into the input polygon regions, then outputs a point for each region containing the average x, y coordinates and the average of each selected nodal data component.

Subsections of Modeling

3d streamlines

The 3d streamlines module is used to produce streamlines or stream-ribbons of a field which has a 2- or 3-element vector data component on any type of mesh. Streamlines, which are simply 3D polylines, represent the pathways particles would travel based on the gradient of the vector field. At least one of the nodal data components input to streamlines must be a vector. The direction of travel of streamlines can be specified to be forwards (toward high vector magnitudes) or backwards (toward low vector magnitudes) with respect to the vector field. Streamlines are produced by integrating the velocity field using the Runge-Kutta method of specified order with adaptive time steps.
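The integration described above can be sketched as follows. This is an illustrative fixed-step RK4 tracer on a callable vector field; the function names and the example field are hypothetical, not the EVS implementation (which uses adaptive time steps and selectable order):

```python
import numpy as np

def trace_streamline(velocity, seed, dt=0.1, steps=100, direction=1.0):
    """Trace a streamline through a vector field with classic RK4.

    velocity:  callable mapping a 3-vector position to a 3-vector velocity
    seed:      starting point (3-vector)
    direction: +1.0 traces forwards, -1.0 backwards along the field
    """
    p = np.asarray(seed, dtype=float)
    path = [p.copy()]
    for _ in range(steps):
        k1 = direction * velocity(p)
        k2 = direction * velocity(p + 0.5 * dt * k1)
        k3 = direction * velocity(p + 0.5 * dt * k2)
        k4 = direction * velocity(p + dt * k3)
        p = p + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(p.copy())
    return np.array(path)

# Hypothetical field: uniform flow in +x with a slight downward drift.
field = lambda p: np.array([1.0, 0.0, -0.1])
line = trace_streamline(field, seed=[0.0, 0.0, 10.0], dt=0.5, steps=20)
```

Each returned row is one polyline vertex; connecting them yields the streamline geometry the module outputs.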

Module Input Ports

  • Input Field [Field] Accepts a data field.
  • Input Locations Field [Field] Accepts the starting points for each line

Module Output Ports

  • Output Field [Field] Outputs the streamlines or ribbons
  • Output Object [Renderable]: Outputs to the viewer.

surface streamlines

The surface streamlines module is used to produce streamlines on any surface based on its slopes. Streamlines are 3D polylines representing the paths particles would travel based on the slopes of the input surface. The direction of travel of streamlines can be specified to be downhill or uphill for the slope case. A physics simulation option is also available which employs a full physics simulation including friction and gravity terms to compute streamlines on the surface.

The Physics radio buttons allow the user to specify whether streamlines will be computed based on the slopes of the surface only, or whether a full physics simulation including friction and gravity terms will be used. When Gravity is selected, Segments per Cell and Order do not apply, but additional parameters appear for the module. These are:

Integration Time Step is the time step for the numerical integration of the paths. For typical gravity units (like 32 feet per second-squared) this value is in seconds.

Gravity is the coefficient of gravity for your units. If your coordinate units are feet, the appropriate (default) value would be 32 feet per second-squared.

Viscosity Coefficient (v) is the friction term that depends on velocity.

Drag Coefficient (v2) is the friction term that depends on velocity-squared.
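One explicit time step combining the parameters above might look like the following sketch. It assumes simple Euler integration of a particle constrained to slide downhill; the parameter names mirror the UI, but the actual EVS solver is not published:

```python
import numpy as np

def physics_step(pos, vel, slope_dir, dt=0.01, gravity=32.0,
                 viscosity=0.1, drag=0.01):
    """One explicit Euler step of a particle sliding on a surface.

    slope_dir: unit 3-vector pointing downhill at the particle's position
    viscosity: friction term proportional to velocity (v)
    drag:      friction term proportional to velocity squared (v^2)
    """
    speed = np.linalg.norm(vel)
    accel = (gravity * slope_dir            # gravity pulls downhill
             - viscosity * vel              # linear (viscous) friction term
             - drag * speed * vel)          # quadratic (drag) friction term
    vel = vel + dt * accel
    pos = pos + dt * vel
    return pos, vel

# Particle starting at rest on a hypothetical 45-degree slope.
downhill = np.array([1.0, 0.0, -1.0]) / np.sqrt(2.0)
p, v = np.array([0.0, 0.0, 100.0]), np.zeros(3)
p, v = physics_step(p, v, downhill)
```

Repeating the step with the Integration Time Step advances the particle along its path; the two friction terms eventually balance gravity, giving a terminal velocity.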

Module Input Ports

  • Input Surface [Field] Accepts a data field which must be a surface with elevation data.
  • Input Locations Field [Field] Accepts the starting points for each line

Module Output Ports

  • Output Field [Field] Outputs the streamlines
  • Output Object [Renderable]: Outputs to the viewer.

create drill path

The create drill path module allows you to interactively create a complex drill path with multiple segments.

Each segment can be defined by one of three methods:

  1. Continue Straight: for the specified “Total Length” along the current direction or Initial Drill Direction, if just starting.
  2. Target Coordinate: Begin deviating with specified “Segment Length” and specified “Max Angle of Change” (per segment) until you reach the specified “(X,Y,Z)” coordinate.
  3. Move to Heading: Begin deviating with specified “Segment Length” and specified “Max Angle of Change” (per segment) until you reach the specified “Heading” and “Dip”

modpath

The modpath module uses the cell by cell flow values generated from a MODFLOW project along with head values and other MODFLOW parameters to trace the path of a particle of water as it moves through the ground. The paths are calculated using the same algorithms used by U.S. Geological Survey MODPATH and the results should be similar.

The modpath module at this point does not handle transient simulations the same way that the U.S.G.S. MODPATH does. It treats each time step as a steady state model, and uses the parameters from the .dwr/.dwz file based on the starting time.

A valid modpath field file (.eff/.efz) should contain the following as cell data components: Head; CCF; ELEV_TOP; ELEV_BOT; and POROSITY. The Head component should contain the head value for each cell, the ELEV_TOP and ELEV_BOT components should contain the elevations of the top and bottom of each cell respectively, and the POROSITY component should contain the porosity for each cell. All other MODFLOW parameters (drains, wells, recharge, etc.) should be written into a .dwr/.dwz file.

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration) from other modules
  • Input Field [Field] Accepts a data field.
  • Input Starting Locations [Field] Accepts the starting points for each line
  • Start Date [Number] The starting time
  • Ending Date [Number] The ending time

Module Output Ports

  • Z Scale [Number] Outputs Z Scale (vertical exaggeration) to other modules
  • Output Field [Field] Outputs the streamlines
  • Start Date [Number] The starting time
  • Ending Date [Number] The ending time
  • Output Object [Renderable]: Outputs to the viewer.

scalars to vector

The scalars to vector module is used to create an n-length vector by combining n selected scalar data components. The vector length is determined by the Vector Type selector (2D or 3D).

Once the required number of components has been selected, any other data components are grayed out and not selectable. To change selections, first deselect one of the vector components and then select a new component. If no components are selected, then all components are selectable. The order in which the components are selected will determine in which order they occur in the vector.
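Conceptually the operation is a column stack of the chosen components in selection order (a numpy sketch, not EVS internals; the component names and values are hypothetical):

```python
import numpy as np

# Hypothetical nodal scalar components, one value per node.
vx = np.array([1.0, 2.0, 3.0])
vy = np.array([4.0, 5.0, 6.0])
vz = np.array([7.0, 8.0, 9.0])

# Selection order determines subcomponent order in the output vector.
vector_3d = np.column_stack([vx, vy, vz])   # shape (n_nodes, 3)
vector_2d = np.column_stack([vy, vx])       # a 2D vector, reversed order
```

Selecting the same components in a different order simply permutes the columns, which is why the module tracks selection order.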

Module Input Ports

  • Input Field [Field] Accepts a data field with 2 or more nodal data components.

Module Output Ports

  • Output Field [Field] Outputs the field with selected data
  • Output Object [Renderable]: Outputs to the viewer.

The vector to scalars module converts all vector nodal data components into individual scalars. For example, a vector data component named “velocity” will be converted to three scalar nodal data components such as:

  1. velocity_x
  2. velocity_y
  3. velocity_z

If multiple vector data components exist in the field, all will be converted.
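The splitting can be pictured as slicing each subcomponent out of the vector array and naming it with an `_x`/`_y`/`_z` suffix (a sketch; the dict-of-arrays layout is illustrative, not the EVS field structure):

```python
import numpy as np

# Hypothetical (n_nodes, 3) vector nodal data component named "velocity".
velocity = np.array([[1.0, 0.0, -0.5],
                     [2.0, 1.0, -1.0]])

# Slice each subcomponent into its own named scalar component.
scalars = {f"velocity_{axis}": velocity[:, i]
           for i, axis in enumerate("xyz")}
```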

Module Input Ports

  • Input Field [Field] Accepts a data field with 1 or more vector nodal data components.

Module Output Ports

  • Output Field [Field] Outputs the field with vector data converted to scalars.

vector magnitude

The vector magnitude module calculates the vector magnitude of a vector field data component at every node in a mesh. Input to vector magnitude must contain a mesh of any type and nodal data. Nodal data components can be scalar or vector with up to 3 vector subcomponents.
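The per-node computation is simply the Euclidean norm of each vector (numpy sketch with hypothetical data):

```python
import numpy as np

# Hypothetical (n_nodes, 3) vector data component.
velocity = np.array([[3.0, 4.0, 0.0],
                     [0.0, 0.0, 2.0]])

# Euclidean norm of each row gives one scalar per node.
magnitude = np.linalg.norm(velocity, axis=1)   # -> [5.0, 2.0]
```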

Module Input Ports

  • Input Field [Field] Accepts a vector data field

Module Output Ports

  • Output Field [Field] Outputs the scalar data field
  • Output Object [Renderable]: Outputs to the viewer

Related Modules

gradient

gradient

The gradient module calculates the vector gradient field of a scalar data component at every node in a mesh. Input to gradient must contain a mesh of any type and nodal data, with at least one scalar nodal data component. Gradient uses a finite-difference method based on central differencing to calculate the gradient on structured (rectilinear) meshes. Shape functions and their derivatives are used to calculate the gradient on unstructured meshes.

Please note that the gradient of (pressure) head points in the direction of increasing head, not the direction that groundwater would flow. Please see the seepage_velocity module if you wish to compute groundwater flow.
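On a rectilinear mesh, the central-difference scheme described above corresponds to `numpy.gradient` (a sketch on a uniform grid with a hypothetical linear head field; EVS handles unstructured meshes with shape functions instead):

```python
import numpy as np

# Scalar head field h(x, y) = 2x + 3y sampled on a uniform grid.
x = np.arange(5.0)
y = np.arange(4.0)
X, Y = np.meshgrid(x, y, indexing="ij")
head = 2.0 * X + 3.0 * Y

# Central differencing in the interior, one-sided at the edges.
dh_dx, dh_dy = np.gradient(head, x, y)
```

For this linear field the computed gradient is exactly (2, 3) everywhere, illustrating that the gradient points toward increasing head.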

Module Input Ports

  • Input Field [Field] Accepts a data field

Module Output Ports

  • Output Field [Field] Outputs the vector data field
  • Output Object [Renderable / Minor]: Outputs to the viewer

Related Modules

vector magnitude

capture_zone

The capture_zone module utilizes 3d streamlines technology to determine the volumetric regions of your model for which groundwater flow will be captured by one or more extraction wells.

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration).
  • Input Field [Field] Accepts a field with vector data.
  • Well Nodes [Field] Accepts a field of points representing the well locations

Module Output Ports

  • Output Field [Field] Outputs the volumetric regions which are captured

seepage_velocity

The seepage_velocity module is used to compute the vector groundwater flow field and to create visualizations of that vector field.

The input data requirements for the seepage_velocity module are:

  1. A data component representing head (can have any name).
  2. A Geo_Layer data component.
  3. A Material_ID data component. If there is no Material_ID, each layer is treated as a separate material: Layer 0 becomes material -1, Layer 1 becomes material -2, Layer 2 becomes material -3, etc.

Note: If you use 3d estimation to krige head data with geologic input (in Version 6.0 or later) your output will meet these criteria (provided you toggle on these data components under Kriging Parameters).

The Run toggle determines if the module runs immediately when you change conductivity values.

Head Data Component determines which data component is used to scale and rotate the seepage_velocity velocity vectors. The default selection is the first data component.

Map component determines which data component is used to color the seepage_velocity velocity vectors. The Map component radio button list displays all data components passed to seepage_velocity. By default, the first (0th) data component is selected.

Head Data Component list displays all data components passed to seepage_velocity.

Current Material: allows you to select the Material (or geologic layer) to assign conductivity and porosity properties.

HeadUnits radio button list allows you to specify the units of your head data.

Output Conductivity Units: radio button list allows you to choose the units for specifying the conductivity in all three (x, y, z) directions for each geologic layer. You can choose any units (regardless of your head and coordinate units) and the appropriate conversions will be made for you.

The Conductivity sliders (with type-ins) allow you to change the log10 of the x, y, & z conductivity. These specify log values because conductivities vary over many orders of magnitude. These update when the (Linear) type-ins are changed.

The Conductivity type-ins allow you to change the x, y, & z conductivity. These are actual values and update when the sliders are changed.

The Effective Porosity slider (with type-in buttons) allows you to change the value of effective porosity.

Material (#/Name): allows you to specify the material type if it is not specified in your geologic layers. This is only to help you assign proper conductivities.

Data passed to the field port must be a 3D mesh with data representing heads and normally multiple Materials (or geologic layers).

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration).
  • Input Field [Field] Accepts a data field with geologic and head data

Module Output Ports

  • Z Scale [Number] Outputs Z Scale (vertical exaggeration) to other modules
  • Output Field [Field] Outputs the vector data field

Technical Details

Inherent in the solution of seepage velocity implemented in this module is the assumption that within each geologic layer/material the conductivities are uniform. Clearly, this will never be completely accurate, however we would contend that there is seldom if ever a better measure of the site conductivities (true conductivity tensor) than the site heads because head is far easier to measure. Furthermore, geologic materials can be deposited such that their conductivities are very complex and directional and most groundwater models (e.g. MODFLOW) do not provide a way to reflect this EVEN IF IT COULD BE MEASURED.

This approach allows users to quickly investigate the impact on flow paths due to changes in the conductivity assigned to each layer/material, BASED ON THE MEASURED/KRIGED HEAD DISTRIBUTION. Clearly, the more accurately the head is characterized the better.

At this point, we don’t propose to provide a mechanism to account for conductivity variations within a geologic layer. We obviously cannot account for natural or artificial barriers (low conductivity regions) UNLESS they are represented by the geologic materials.

Our approach is:

Compute the true seepage velocity (Vx, Vy, Vz) at each node by taking the gradient of (kriged) head (without any z-exaggeration), multiplying each component of the head gradient by the corresponding component of conductivity at that node (based on its material) (Kx, Ky, Kz), and dividing by the Effective Porosity (Ne) for that material. The negative signs reflect that water flows from high head toward low head:

Vx = -dH/dx * Kx / Ne

Vy = -dH/dy * Ky / Ne

Vz = -dH/dz * Kz / Ne

Darcy Flux = -K * (dh/dL), also known as Darcy Velocity, Specific Discharge or apparent velocity, and

Seepage Velocity = -K * (dh/dL) / ne, where:

  • K = hydraulic conductivity, is the proportionality constant reflecting the ease with which water flows through a material (L/T)
  • dh = difference in hydraulic head between two measuring points (L)
  • dL = length along the flow path between locations where hydraulic heads are measured (L)
  • dh/dL = gradient of hydraulic head (dimensionless)
  • ne = effective porosity
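Putting the equations above together, the per-node computation reduces to a few lines (a numpy sketch for a single node; the example gradient, conductivities, and porosity are hypothetical):

```python
import numpy as np

def seepage_velocity(head_gradient, K, ne):
    """Seepage velocity V = -K * grad(h) / ne, componentwise.

    head_gradient: (dH/dx, dH/dy, dH/dz), dimensionless
    K:             (Kx, Ky, Kz) hydraulic conductivity, length/time
    ne:            effective porosity, dimensionless
    """
    return -np.asarray(K) * np.asarray(head_gradient) / ne

# Head dropping in +x at 0.01 per unit length, Kx = 10 ft/day, ne = 0.25.
v = seepage_velocity([-0.01, 0.0, 0.0], [10.0, 10.0, 1.0], 0.25)
# Flow is in +x (toward decreasing head): v[0] = 0.4 ft/day.
```

Note how the minus sign makes the velocity point down-gradient, consistent with the Darcy flux definition above.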

regional_averages

The regional_averages module averages nodal data values from the input field that fall into the input polygon regions. It then outputs a point for each region that contains the average x, y coordinates and the average of each selected nodal data component.

These polygons must contain at least 1 cell data component representing the regional ID.
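The averaging reduces to a group-by over region IDs (a plain-numpy sketch with hypothetical nodes and region assignments; EVS determines the region of each node from the input polygons):

```python
import numpy as np

# Hypothetical nodes: x-y coordinates, a data value, and a region per node.
xy = np.array([[0.0, 0.0], [2.0, 0.0], [10.0, 10.0], [12.0, 10.0]])
values = np.array([1.0, 3.0, 10.0, 20.0])
region = np.array([0, 0, 1, 1])   # regional ID each node falls in

# One output point per region: average coordinates and average value.
averages = {
    rid: (xy[region == rid].mean(axis=0),      # average x, y
          values[region == rid].mean())        # average data value
    for rid in np.unique(region)
}
```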

Module Input Ports

  • Input Field [Field] Accepts a data field.
  • Input Surface [Field] Accepts a cell data field defining the regions

Module Output Ports

  • Output Field [Field] Outputs the processed field.
  • Output Object [Renderable]: Outputs to the viewer
  • draw lines

    The draw_lines module enables you to create both 2D and 3D lines interactively with the mouse.

  • polyline processing

    The polyline processing module accepts a 3D polyline and can either increase or decrease the number of line segments of the polyline.

  • triangulate polygons

    triangulate_polygons converts a closed polyline into a triangulated surface. This surface can be extruded or used by the distance to 2d area module to perform areal subsetting of 3D models.

  • triangle refinement

    triangle refinement is primarily for use with distance to surface. It can subdivide triangular and quadrilateral cells until none of the sides of the output triangles exceed a user-specified length.

  • tubes

    The tubes module is used to produce open or closed tubes of constant or data-dependent radius using 3D lines or polylines as input.

  • volumetric_tunnel

    The volumetric_tunnel module allows you to create a volumetric tunnel model that is defined by a polygonal surface cross-section along a complex 3D path.

  • cross section tubes

    The cross_section_tubes module is used to produce open or closed tubes of user-defined cross-section and constant or data-dependent radius, using 3D lines or polylines as input for the centerline and a single 2D polyline as the cross-section of the tubes.

  • extrude

    The extrude module accepts any mesh and adds one to the dimensionality of the input by extruding the mesh in the Z direction.

  • drive glyphs

    The drive_glyphs module provides a way to move any object (glyph or object from Read_DXF, etc.) along multiple paths to create a “driving” animation. drive_glyphs has three input ports: the first port accepts the paths to follow (normally from read_lines), and the second port accepts the glyph or vehicle to drive, usually read in with the read glyph module.

  • place glyph

    The place_glyph module is used to place a single scalable geometric object (glyph) at an interactively determined location.

  • glyphs at nodes

    The glyphs at nodes module is used to place geometric objects (glyphs) at nodal locations. The glyphs can be scaled, rotated and colored based on the input data.

  • create fault surface

    The create_fault_surface module creates a 3D grid that is aligned to a specified strike and dip.

  • create grid

    The create_grid module produces a 2D or 3D uniform grid that can be used for any purpose. A typical use is starting points for 3d streamlines or advector. In 2D (default) mode it creates a rectangle of user-adjustable grid resolution and orientation. In 3D mode it creates a box (3D grid). The number of nodes will depend on the X, Y & optional Z resolutions as well as the cell type specified.

Subsections of Geometry

draw_lines

The draw_lines module enables you to create both 2D and 3D lines interactively with the mouse.

The mouse gesture for line creation is: depress the Ctrl key and then click the left mouse button on any pickable object in the viewer. The first click establishes the beginning point of the line segment and the second click establishes each successive point.

draw_lines allows adding of points that are outside the model extents, undoing of the last picked point, and the clearing of all picked points. Unlike most modules, which create mesh data to be used by other modules, the draw_lines module receives input from the viewer, and also passes on field data to be used by other modules.

There are two drawing modes:

  1. Top View Mode creates 2D lines which are always at Z=0.0. You must be in a Top View to draw with this mode, but you may pick points anywhere in the viewer screen.

  2. Object Mode creates 3D lines which are drawn by probing objects in your model. You cannot draw at a point without having an object there or specifying a coordinate using the x-y-z type-ins.

NOTE: Because draw_lines saves your lines with your application, when an application is saved, the purple port is automatically disconnected from the viewer. This ensures that when you load an application the resulting objects (lines, fence-diagrams, etc.) will look exactly the same as when you saved the application. However, if you wish to draw new lines you will need to reconnect the purple port from the viewer.

Module Input Ports

  • View [View] Connects to the viewer to receive the extent of all objects in the viewer for scaling lines and drawing on objects.

Module Output Ports

  • Output Field [Field / minor] Outputs the drawn lines as a field.
  • Sample Data [Renderable]: Outputs to the viewer.

polyline processing

The polyline processing module accepts a 3D polyline and can either increase or decrease the number of line segments of the polyline. A splining algorithm smooths the line trajectory once the number of points is specified. This module is useful for applications such as a fly-over (along a polyline path drawn by the user). If the user-drawn line is jagged with erratically spaced line segments, polyline processing smooths the path and creates evenly spaced line segments along the path.
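The even-spacing behavior can be sketched as arc-length resampling (plain numpy with a hypothetical jagged path; the actual splining algorithm in EVS is not published, so linear interpolation stands in for the spline):

```python
import numpy as np

def resample_polyline(points, n):
    """Resample a 3D polyline to n evenly spaced points by arc length."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])       # arc length at each vertex
    t = np.linspace(0.0, s[-1], n)                    # evenly spaced arc lengths
    return np.column_stack([np.interp(t, s, points[:, k]) for k in range(3)])

# Jagged path with erratically spaced segments (hypothetical).
path = [[0, 0, 0], [0.1, 0, 0], [5, 0, 0], [10, 0, 0]]
even = resample_polyline(path, 5)
```

The output vertices are uniformly spaced along the path, which is what makes a fly-over animation move at constant speed.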

Module Input Ports

  • Input Field [Field] Accepts a 3D polyline field

Module Output Ports

  • Output Data [Field] Outputs the splined lines
  • Output Object [Renderable]: Outputs to the viewer

triangulate_polygons

triangulate_polygons converts a closed polyline into a triangulated surface. This surface can be extruded or used by the distance to 2d area module to perform areal subsetting of 3D models.

Polylines with WIDTH in AutoCAD DWG files are converted by import_cad into triangle strips of the specified width. As you zoom in on polylines with width, the apparent width will change, whereas the apparent width of lines DOES NOT change. However, once they are triangles, they DO NOT define a closed area and therefore would not work with triangulate_polygons.

Module Input Ports

  • Input Field [Field] Accepts a data field representing closed polygon(s).

Module Output Ports

  • Output Field [Field] Outputs the surface(s) field
  • Output Object [Renderable]: Outputs to the viewer.

triangle refinement

triangle refinement is primarily for use with distance to surface. It can subdivide triangular and quadrilateral cells until none of the sides of the output triangles exceed a user-specified length (a default value is calculated as 5% of the x-y extent of your input surface). This increases the accuracy of distance to surface, especially when the input surface comes from create_tin and the nodes used to create the TIN are poorly spaced. It can also correct the normals of a surface. It does this by organizing all of the triangles and quadrilaterals in a surface into disjoint patches, and then allowing the user to select which patches have normals that need to be flipped. The maximum number of triangles in a patch is 130,000; any triangles above this number will be considered to be in the next patch.

Removing small cells is used to remove extremely small cells (based on area in your coordinate units squared) that are sometimes generated by CAD triangulation routines; these cells might have their normal vectors reversed and would contribute to poor cutting-surface definition. Try this option if you find that distance to surface is giving anomalous results.

The maximum edge length allows the maximum length of each triangle side to be set when the Split Cells option is set.

The ability to fix normals is used to check that all of the triangles in selected patches of the surface have the same normal vector direction. If the normal is backwards, you can flip the normal of the patch in two ways. The first way is to Alt + Right-click on a cell in the patch that you wish to flip and then click the Add patch to flip list button. You only need to do this for one cell in each patch. Another way is to set the Cell ID and Cell Data value of a cell in the patch you wish to flip. The Cell ID and Cell Data values must be obtained from the surface being output from triangle refinement, not the surface being input.
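The subdivision criterion can be pictured as recursive midpoint splitting until no edge exceeds the maximum length (an illustrative sketch only; the actual EVS algorithm and its patch handling are more involved):

```python
import numpy as np

def refine_triangle(tri, max_edge):
    """Recursively 4-split a triangle until all edges are <= max_edge."""
    a, b, c = (np.asarray(p, dtype=float) for p in tri)
    edges = [np.linalg.norm(b - a), np.linalg.norm(c - b), np.linalg.norm(a - c)]
    if max(edges) <= max_edge:
        return [(a, b, c)]
    # Split at edge midpoints into four similar half-size triangles.
    ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
    out = []
    for child in [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]:
        out.extend(refine_triangle(child, max_edge))
    return out

tris = refine_triangle([(0, 0, 0), (4, 0, 0), (0, 4, 0)], max_edge=3.0)
```

Each 4-split halves the edge lengths, so a triangle needs only a few levels of recursion to satisfy any reasonable maximum edge length.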

Module Input Ports

  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Field [Field] Outputs the refined grid.
  • Sample Data [Renderable]: Outputs to the viewer.

tubes

The tubes module is used to produce open or closed tubes of constant or data-dependent radius using 3D lines or polylines as input. Tube size, number of sides and data-dependent coloring are all adjustable.

Rotation of the tubes is done with the Phase slider (or type-in), which is specified in degrees. There are two methods used to maintain continuity of the tube orientation as the path meanders along a 3D path. These are specified as the Phase Determination method:

  • Force Z Up: is the default and is most appropriate for paths that stay relatively horizontal. This option keeps the tube faces aligned with the Z axis; therefore, with a slope of 30 degrees, the effective cross-sectional area of the tube would be reduced by a factor of cos(30°), about a 13% reduction. However, for the typical slopes found with tunneling this effect is quite minimal, and this option keeps the tube perfectly aligned.
  • Perpendicular Extrusions: keeps the tube cross-section aligned with the tube (extrusion) path and therefore preserves the cross-section no matter what the path. However, tube rotation creep is possible.

Module Input Ports

  • Input Field [Field] Accepts a field with or without data containing lines which represent the paths of the tubes.

Module Output Ports

  • Output Field [Field] Outputs the field as tubes.
  • Output Object [Renderable]: Outputs to the viewer.

volumetric_tunnel

The volumetric_tunnel module allows you to create a volumetric tunnel model that is defined by a polygonal surface cross-section along a complex 3D path. Once this volumetric grid is defined, it can be used as input to various modules to map analyte and/or geologic data onto the tunnel. These include:

  • 3d estimation: external grid port: to map analytical data
  • lithologic modeling: external grid port: to map lithologic data
  • interp_data to map analytical data
  • interp_cell_data: to map stratigraphic or lithologic material data

The requirements for the tunnel path and cross-section are:

  • The path must be defined by a line input to the Right input port.
  • The tunnel cross-section is defined by a surface input to the Left input port.
    • The cross-section should be defined in the X-Y plane at Z = 0 (2D)
    • The coordinates (size) of the cross-section should be actual scale in the same units as the tunnel path (generally feet or meters).
      • Do not use cm for cross-section and meters for path.
      • Generally, the X-Y Origin (0, 0) should lie within the cross-section and should represent where the tunnel path should be.

cross_section_tubes

The cross_section_tubes module is used to produce open or closed tubes of user defined cross-section and constant or data dependent radius using 3D lines or polylines as input for the centerline and a single 2D polyline as the cross-section of the tubes.

Module Input Ports

  • Input Field [Field] Accepts a field with or without data containing lines which represent the paths of the tubes.
  • Input Cross Section Field [Field] Accepts a field which has the cross-section of the tubes.

Rotation of the cross-section is done with the Phase slider (or type-in), which is specified in degrees. There are two methods used to maintain continuity of the tube orientation as the path meanders along a 3D path. These are specified as the Phase Determination method:

  • Force Z Up: is the default and is most appropriate for paths that stay relatively horizontal. This option keeps the tube cross-section aligned with the Z axis; therefore, with a slope of 30 degrees, the effective cross-sectional area of the tube would be reduced by a factor of cos(30°), about a 13% reduction. However, for the typical slopes found with tunneling this effect is quite minimal, and this option keeps the tube perfectly aligned.
  • Perpendicular Extrusions: keeps the tube cross-section aligned with the tube (extrusion) path and therefore preserves the cross-section no matter what the path. However, cross-section rotation creep is possible.

The cross section field input must be a closed polyline drawn in the X-Y plane at the correct size. It should be balanced about the origin in X, usually with the Y axis (X=0) at the floor of the tunnel. This results in the tunnel being created such that the tunnel path lies at the centerline FLOOR of the tunnel, as shown in the picture below.

This tube was created with an EVS Line File (.elf) that was very simple and is shown below:

LINE

-10 0 0

-10 7 0

-7 10 0

7 10 0

10 7 0

10 0 0

CLOSE

END

As you can see, all of the Z coordinates are zero since they are irrelevant. The shape is balanced about the Y axis and lies entirely at Y >= 0.
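As a sketch of how such a listing can be checked, the following Python snippet (a hypothetical helper, not part of EVS; the real .elf format supports more keywords than are handled here) parses the cross-section above and verifies that it is closed, balanced about X = 0, and entirely at Y >= 0:

```python
def parse_elf_cross_section(text: str):
    """Parse a minimal EVS Line File (.elf) cross-section.
    Returns (points, closed)."""
    points, closed = [], False
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line in ("LINE", "END"):
            continue
        if line == "CLOSE":
            closed = True
            continue
        x, y, z = (float(v) for v in line.split())
        points.append((x, y, z))
    return points, closed

elf = """LINE
-10 0 0
-10 7 0
-7 10 0
7 10 0
10 7 0
10 0 0
CLOSE
END"""

pts, closed = parse_elf_cross_section(elf)
# Closed shape, symmetric about X = 0, all points at Y >= 0.
```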

Module Output Ports

  • Output Field [Field] Outputs the subsetted field as faces.
  • Output Object [Renderable]: Outputs to the viewer.

extrude

The extrude module accepts any mesh and adds one to the dimensionality of the input by extruding the mesh in the Z direction. The interface enables changing the height scale for extruded cells and extruding by a constant or by any nodal or cell data component. This module is often used with the import vector gis module to convert polygonal shapefiles into extruded volumetric cells.

When Node Data Component is chosen, the output cells will be extruded by the Scale Factor times the value of whichever nodal data component is selected on the right. With nodal data extrusion you must select “Positive Extrusions Only” or “Negative Extrusions Only”. Since each node of a triangle or quadrilateral can have different values, it is possible for a single cell to have both positive and negative data values at its nodes. If this type of cell is extruded both directions, the cell topology can become tangled.

For this reason, nodal data extrusions must be limited to one direction. To extrude in both directions, merely use two extrude modules in parallel, one set to positive and the other to negative.
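The one-direction restriction above can be sketched as follows (plain Python for illustration, not the module's actual code): each pass clamps values of the opposite sign to zero, and two parallel passes cover both directions.

```python
def extrude_offsets(node_values, scale, direction):
    """Per-node extrusion distances for one extrude pass.
    direction is 'positive' or 'negative'; values of the opposite
    sign are clamped to zero so a cell is never extruded both ways
    in a single pass (which could tangle the cell topology)."""
    if direction == "positive":
        return [scale * v if v > 0 else 0.0 for v in node_values]
    return [scale * v if v < 0 else 0.0 for v in node_values]

# A cell with mixed-sign nodal values: run two passes in parallel,
# one per direction, as the text recommends.
values = [2.0, -1.5, 0.5]
up = extrude_offsets(values, scale=10.0, direction="positive")
down = extrude_offsets(values, scale=10.0, direction="negative")
```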

Module Input Ports

  • Input Field [Field] Accepts a field with or without data

Module Output Ports

  • Output Field [Field / Minor] Outputs the field
  • Output Object [Renderable]: Outputs to the viewer

drive_glyphs

The drive_glyphs module provides a way to move any object (a glyph, or an object from Read_DXF, etc.) along multiple paths to create a “driving” animation.

Module Input Ports

drive_glyphs has three input ports.

Data passed to the first port is the paths to follow (normally from read_lines).

The second port accepts the glyph or vehicle to drive, usually read in with the read glyph module.

The third port is a float parameter for the position of the glyphs.

Module Output Ports

drive_glyphs has three output ports.

The leftmost output port is a float parameter for the position of the glyphs along the input paths.

The center port is the animated glyphs.

The right output port is the animated glyphs in a renderable form for the viewer.

place_glyph

General Module Function

The place_glyph module is used to place a single scalable geometric object (glyph) at an interactively determined location.

glyphs at nodes

The glyphs at nodes module is used to place geometric objects (glyphs) at nodal locations. The glyphs can be scaled, rotated and colored based on the input data. If the input data is a vector, the glyph can be scaled and rotated to represent the direction and absolute magnitude of the vector field. In a scalar data field, the objects can be scaled based on the magnitude of the scalar. The glyphs can represent the data field of one data component while being colored by another data component. Arrow glyphs are commonly used in vector fields to produce visualizations of the vector field.
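The scale-and-rotate idea for vector data can be sketched in plain Python (a geometry illustration only, not the module's actual math): the glyph length follows the vector magnitude and its rotation follows the vector direction.

```python
import math

def glyph_transform(vector):
    """Scale and azimuth/inclination (degrees) for an arrow glyph
    representing a 3D vector."""
    vx, vy, vz = vector
    magnitude = math.sqrt(vx * vx + vy * vy + vz * vz)
    azimuth = math.degrees(math.atan2(vy, vx))
    inclination = math.degrees(math.asin(vz / magnitude)) if magnitude else 0.0
    return magnitude, azimuth, inclination

# An in-plane vector of magnitude 5: the arrow is 5 units long,
# rotated ~53 degrees from the X axis, with no inclination.
scale, az, inc = glyph_transform((3.0, 4.0, 0.0))
```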

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration).
  • Input Field [Field] Accepts a field with scalar or vector data.
  • Input Glyph [Field] Accepts a field representing the glyphs

Module Output Ports

  • Output Field [Field] Outputs the glyphs
  • Output Object [Renderable]: Outputs to the viewer.

create_fault_surface

The create_fault_surface module creates a 3D grid that is aligned to a specified strike and dip.

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration).
  • Input Field [Field] Accepts a field to extract its extent

Module Output Ports

  • Z Scale [Number] Outputs Z Scale (vertical exaggeration) to other modules
  • Output Field [Field / Minor] Outputs the surface
  • Fault Surface [Renderable]: Outputs to the viewer

create_grid

The create_grid module produces a 2D or 3D uniform grid that can be used for any purpose. A typical use is as starting points for 3d streamlines or advector. In 2D (default) mode it creates a rectangle of user-adjustable grid resolution and orientation. In 3D mode it creates a box (3D grid). The number of nodes will depend on the X, Y and optional Z resolutions as well as the cell type specified.
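As a sketch of how the node count follows from the per-axis resolutions, the following Python snippet (illustrative only; create_grid exposes these choices through its UI) generates the node coordinates of a uniform grid over a bounding box:

```python
def uniform_grid_nodes(extent_min, extent_max, res):
    """Node coordinates of a uniform grid over a bounding box.
    res gives the number of nodes along each axis (2D or 3D),
    so the total node count is the product of the resolutions."""
    axes = []
    for lo, hi, n in zip(extent_min, extent_max, res):
        step = (hi - lo) / (n - 1) if n > 1 else 0.0
        axes.append([lo + i * step for i in range(n)])
    # Cartesian product of the per-axis coordinates
    nodes = [[x] for x in axes[0]]
    for axis in axes[1:]:
        nodes = [node + [c] for node in nodes for c in axis]
    return nodes

# An 11 x 6 rectangle: 66 nodes spanning (0,0) to (10,5).
grid = uniform_grid_nodes((0.0, 0.0), (10.0, 5.0), (11, 6))
```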

Module Input Ports

  • Input Field [Field] Accepts a field to extract its extent and properly set the application origin. Do not use this module without input to this port, which can be as simple as post_samples.

Module Output Ports

  • Output Field [Field / Minor] Outputs the surface
  • Surface [Renderable]: Outputs to the viewer

Subsections of Projection

project onto surface

project onto surface provides a mechanism to drape lines and triangles (surfaces) onto surfaces. Please note that a pseudo-3D object like a building made up of triangle faces will be flattened onto the surface. The 3D nature will not be preserved. Lines and surfaces are subsetted to match the size of the cells of the surface on which the lines are draped. In other words, draped objects will match the surface precisely.

Module Input Ports

  • Input Geologic Field [Field] Accepts a geologic field
  • Input Lines [Field] Accepts a field with the lines to be draped

Module Output Ports

  • Output Field [Field] Outputs the draped lines
  • Surface [Renderable]: Outputs the draped lines to the viewer.

transform_field

The transform_field module is used to translate, rotate or scale the coordinates of any field. A typical use is to rotate and translate a MODFLOW or MT3D grid (having a grid origin of 0,0,0) to the actual coordinate system of the modeled area.

Module Input Ports

  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Field [Field] Outputs the transformed field.
  • Output Object [Renderable]: Outputs to the viewer.

transform objects

transform objects is a special group object that allows all connected objects to be rotated (about a user defined center) and/or translated. This is useful if you wish to move objects that are complex, such as group objects like post_samples or axes and therefore cannot be contained in a single field (blue-black) port.

An example of this would be the axes module. If you wanted axes with an origin that did not match your data, they could be created separately and moved using the transform objects module.

Module Input Ports

  • Input Objects [Field] Accepts any number of red object ports from modules to be grouped and transformed

Module Output Ports

  • Output Object [Renderable]: Outputs to the viewer.

Limitations

  • The transform objects module does not change the coordinates that you will see when you probe.

    • We consider this module’s primary purpose to be visualization.
    • We most often use it to display a copy of an existing object in the application. In situations like this we want to retain the original coordinates.
  • In some circumstances transform objects cannot be used with 4DIMs. It can cause the 4DIM extents to be different than they were in the EVS viewer. This has been noted when doing rotations.

  • In most cases, the transform_field module can be used instead, however it does not allow for multiple objects to be connected to its input.


Subsections of Image

overlay_aerial

The overlay_aerial module takes a field as input and maps an image onto the horizontal areas of the grid. The image can be projected from one coordinate system to another. It can also be georeferenced if it has an accompanying world file. All vertical surfaces (Walls) can be included in the output but will not have image data mapped to them.

Note: If you need to georeference your image or adjust the georeferencing, you can do so with the Georeference Image Tool on the Tool Tab

Module Input Ports

  • Input Field [Field] Accepts a data field.
  • Filename [String] The image filename

Module Output Ports

  • Output Field [Field] Outputs the subsetted field as faces.
  • Filename [String] The image filename
  • Output Object [Renderable]: Outputs to the viewer.

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Properties: controls the placement of the texture image
  • Wall Properties: controls how walls are viewed
  • Image Processing: allows for the alteration of the image brightness, contrast, etc.

Image Quality: This selector limits the max resolution of the image being read. Most graphics cards support the High resolution of 2048, but relatively few support 4096 and only professional level cards and some of the newest DirectX 10 cards support 8192. Obviously higher resolution images will take more memory and more time to read, but will look much better when zoomed in.

Georeferencing Method: There are 8 different texture mapping modes as follows:

  1. Map to Min/Max - Map image to the min/max extents of the input surface, or a user-defined value (can be typed into overlay_aerial directly).

  2. Translate - Translate the image. Only requires a single GCP. No rotation or scaling is performed.

  3. 2 pt: Trans./Rot. - Translate, scale, and rotate the image. The image scaling is always the same in X and Y. Only valid if you have 2 GCP points. A good option if you only know 2 GCP points and they are co-linear or nearly co-linear.

  4. Translate/Scale - Translate and scale the image. Scale in X and Y are not the same. This keeps the image orthorectified. Can be used with 2 or more GCP points.

  5. Affine - Perform a full affine transformation (1st order transformation) on the image. Requires a world file or 3 or more GCP points (from a gcp file). This is the default option which can be fully described with a World File.

  6. 2nd Order - Perform a 2nd order polynomial transformation. This requires 6 or more GCP points (from a gcp file). It will map straight lines in the image into arcs. Allows an image that was georeferenced previously into LAT/LON coordinates to be “straightened” out and handled correctly. This can also be used to adjust for minor problems in the image due to topography. This option cannot be described with a World File because it uses a second order polynomial with more terms than are available in a world file. It requires the use of a GCP file.

  7. 3rd Order - Perform a 3rd order polynomial transformation. Requires 10 or more GCP points. Allows you to adjust for drift in the image, “wedge” shaped photography, and more.

  8. 4th Order - Perform a 4th order polynomial transformation. Requires 15 or more GCP points. Allows adjustments to be made where different portions of the image move in opposite directions. Requires many GCP points to use effectively.
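The affine case (option 5) is fully described by the six world-file coefficients. The sketch below (plain Python, not EVS code; the coefficient order follows the world-file convention A, D, B, E, C, F) maps a pixel position to world coordinates:

```python
def world_file_transform(params, col, row):
    """Apply six world-file coefficients to map a pixel (col, row)
    to world coordinates:  x = A*col + B*row + C,  y = D*col + E*row + F."""
    a, d, b, e, c, f = params
    x = a * col + b * row + c
    y = d * col + e * row + f
    return x, y

# North-up image (hypothetical values): 0.5 m pixels, no rotation,
# upper-left pixel centered at (300000, 4500000).
params = (0.5, 0.0, 0.0, -0.5, 300000.0, 4500000.0)
x, y = world_file_transform(params, 100, 200)
```

The higher-order options (2nd through 4th) add polynomial terms beyond this affine form, which is why they cannot be expressed in a world file and require a GCP file instead.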

Image Processing: These options allow for the adjustment of image brightness, sharpness, etc.

Image Projection Options: This toggle allows for the reprojection of the image. Each coordinate system is divided into either Geographic or Projected coordinate systems. The coordinate system types are navigated by selecting the appropriate system type in the far left window. When a general coordinate system has been selected a specific coordinate system can be selected from the center window. If there are any details regarding the selected specific coordinate system, they will appear in the text window on the right. A specific coordinate system must be selected both to project from and to project to, and then the Project Image toggle must be turned on.

texture_cross_section

The texture_cross_section module allows you to apply images along a complex non-linear cross-section (fence) path and compensate for the image scale and registration points at various points along the fence path.

This functionality provides the mechanism to accurately apply hand-drawn cross-sections to 3D fence diagrams. When combined in an application with edit_horizons, texture_cross_section allows you to modify your 3D stratigraphic geology to accurately match your hand-drawn cross-sections.

texture cell sets

The texture cell sets module will texture multiple images onto a field based on the geologic data in the field.

Module Input Ports

  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Object [Renderable]: Outputs to the viewer.

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Properties: controls the placement and scale of the textures
  • Image Processing: allows for the alteration of the image brightness, contrast, etc.

texture_walls

General Module Function:

The texture_walls module provides a means to project an image onto surfaces such as walls of buildings to add more realism to your visualizations.

Module Input Ports

  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Object [Renderable]: Outputs to the viewer.

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Properties: controls the placement and scale of the texture
  • Image Processing: allows for the alteration of the image brightness, contrast, etc.

export georeferenced image

This module will output an image in one of the following formats: BMP, TIF, JPG, or PNG. It will also output a world file that allows the image to be placed correctly in applications that support georeferencing.

Module Input Ports

  • Objects [Renderable]: Receives one or more renderable objects similar to the viewer

fly_through

fly_through is an animation module which facilitates controlling the viewer or creating an animation in which the view follows a complex 3D path:

  • on,
  • through, or
  • around your model.

The method by which this module controls fly-throughs allows the user to pause at any time and interact with the model using their mouse or the Az-Inc panel.

Az-Inc parameters (azimuth, elevation, scale, field of view, rotation/scaling center, etc.) are updated by fly_through in real time. This can be seen by running fly_through with the Az-Inc window open. However, please note that this will slow your animation substantially because of the need to continuously update the parameters in Az-Inc.

IMPORTANT NOTE: Be sure to TURN OFF “Animate viewer” in the Animator module if you’re controlling fly_through with the Animator.

texture_sphere

texture_sphere provides a means to project (texture map) images onto a sphere.

texture_cylinder

texture_cylinder provides a means to project (texture map) images onto a cylinder.

read_eft

The read_eft module provides a mechanism to open saved OBJ file sets, which require multiple files (geometry and textures), as a single file. This is required in order to Package Files, which is a requisite step in the creation of EVS Presentations.


Subsections of Time

read_tcf

The read_tcf module is specifically designed to create models and animations of data that changes over time. This type of data can result from water table elevation and/or chemical measurements taken at discrete times or output from Groundwater simulations or other 3D time-domain simulations.

The read_tcf module creates a field using a Time Control File (.TCF) to specify the date/time, field and corresponding data component to read (in netCDF, Field or UCD format), for each time step of a time_data field. All file types specified in the TCF file must be the same (e.g. all netCDF or all UCD). The same file can be repeated, specifying different data components to represent different time steps of the output.

read_tcf effectively includes internal interpolation between appropriate pairs of the files/data_components specified in the TCF file. Its internal structure only requires reading two successive time steps rather than the complete listing of time steps normally represented in a time_data field.
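The pairwise interpolation described above can be sketched in Python (an illustration of the idea only, not the module's internal code; the dates and values are hypothetical):

```python
from datetime import datetime

def interpolate_between_steps(t, t0, v0, t1, v1):
    """Linear interpolation between two successive TCF time steps,
    the only pair read_tcf needs to hold at once."""
    frac = (t - t0).total_seconds() / (t1 - t0).total_seconds()
    return v0 + frac * (v1 - v0)

# A requested date between two bracketing time steps:
v = interpolate_between_steps(
    datetime(1990, 9, 1),
    datetime(1990, 6, 1), 10.0,   # value at the earlier step (hypothetical)
    datetime(1990, 12, 1), 20.0,  # value at the later step (hypothetical)
)
```

Holding only the bracketing pair is what keeps memory use independent of the number of time steps in the TCF file.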

Module Input Ports

  • Date [Number] Accepts a date

Module Output Ports

  • Start Date [Number] Outputs the starting date
  • End Date [Number] Outputs the ending date
  • Date [Number] Output date
  • Output Field [Field] Outputs the data field

Subsections of read tcf

TCF File Format and Example

The listing below is the full contents of the Time Control File control_tce_cdf.tcf. Blank lines or any lines beginning with a “#” are ignored. Valid lines representing time steps must be in order of ascending time and consist of:

a) a date and/or time in Windows standard format

b) a file name with an absolute path or just the filename (if the data files are in the same directory as the TCF file). This is not a true relative path (..\file.cdf and subdir\file.cdf don’t work, but file.cdf does), but gives some of the relative path abilities.

c) the data component to use for that time step. (You can specify -1 in the third column, which causes ALL the data components to pass through.)

NOTE: These three items on each line must be separated with a comma “,”.

# This file contains the list of control commands for the
# TCE time data in netCDF format.
# The format is a date/time, then the file, then the nodal data component.
# The END on the last line is optional.
# Each line MUST be comma delimited
# (since spaces can exist in the time and filename)

6/1/1990 12:00 AM, $XP_PATH<0>/data/netcdf/time_data/tce_01.cdf, 0

12/1/1990, $XP_PATH<0>/data/netcdf/time_data/tce_02.cdf, 0

2/1/1991, $XP_PATH<0>/data/netcdf/time_data/tce_03.cdf, 0

5/1/1991, $XP_PATH<0>/data/netcdf/time_data/tce_04.cdf, 0

8/1/1991, $XP_PATH<0>/data/netcdf/time_data/tce_05.cdf, 0

11/1/1991, $XP_PATH<0>/data/netcdf/time_data/tce_06.cdf, 0

3/1/1992, $XP_PATH<0>/data/netcdf/time_data/tce_07.cdf, 0

6/1/1992, $XP_PATH<0>/data/netcdf/time_data/tce_08.cdf, 0

10/1/1992, $XP_PATH<0>/data/netcdf/time_data/tce_09.cdf, 0

3/1/1993, $XP_PATH<0>/data/netcdf/time_data/tce_10.cdf, 0

4/1/1993, $XP_PATH<0>/data/netcdf/time_data/tce_11.cdf, 0

8/1/1993, $XP_PATH<0>/data/netcdf/time_data/tce_12.cdf, 0

12/1/1993, $XP_PATH<0>/data/netcdf/time_data/tce_13.cdf, 0

3/1/1994, $XP_PATH<0>/data/netcdf/time_data/tce_14.cdf, 0

6/1/1994, $XP_PATH<0>/data/netcdf/time_data/tce_15.cdf, 0

9/1/1994, $XP_PATH<0>/data/netcdf/time_data/tce_16.cdf, 0

11/1/1994, $XP_PATH<0>/data/netcdf/time_data/tce_17.cdf, 0

3/1/1995, $XP_PATH<0>/data/netcdf/time_data/tce_18.cdf, 0

5/1/1995, $XP_PATH<0>/data/netcdf/time_data/tce_19.cdf, 0

8/1/1995, $XP_PATH<0>/data/netcdf/time_data/tce_20.cdf, 0

10/1/1995, $XP_PATH<0>/data/netcdf/time_data/tce_21.cdf, 0

1/1/1996, $XP_PATH<0>/data/netcdf/time_data/tce_22.cdf, 0

5/1/1996, $XP_PATH<0>/data/netcdf/time_data/tce_23.cdf, 0

9/1/1996, $XP_PATH<0>/data/netcdf/time_data/tce_24.cdf, 0

11/1/1996, $XP_PATH<0>/data/netcdf/time_data/tce_25.cdf, 0

12/1/1996, $XP_PATH<0>/data/netcdf/time_data/tce_26.cdf, 0

3/1/1997 12:00 AM, $XP_PATH<0>/data/netcdf/time_data/tce_27.cdf, 0

6/1/1997, $XP_PATH<0>/data/netcdf/time_data/tce_28.cdf, 0

9/1/1997, $XP_PATH<0>/data/netcdf/time_data/tce_29.cdf, 0

12/1/1997, $XP_PATH<0>/data/netcdf/time_data/tce_30.cdf, 0

3/1/1998, $XP_PATH<0>/data/netcdf/time_data/tce_31.cdf, 0

6/1/1998, $XP_PATH<0>/data/netcdf/time_data/tce_32.cdf, 0

9/1/1998, $XP_PATH<0>/data/netcdf/time_data/tce_33.cdf, 0

11/1/1998, $XP_PATH<0>/data/netcdf/time_data/tce_34.cdf, 0

5/1/1999, $XP_PATH<0>/data/netcdf/time_data/tce_35.cdf, 0

10/1/1999, $XP_PATH<0>/data/netcdf/time_data/tce_36.cdf, 0

3/1/2000, $XP_PATH<0>/data/netcdf/time_data/tce_37.cdf, 0

7/1/2000, $XP_PATH<0>/data/netcdf/time_data/tce_38.cdf, 0

11/1/2000, $XP_PATH<0>/data/netcdf/time_data/tce_39.cdf, 0

3/1/2001, $XP_PATH<0>/data/netcdf/time_data/tce_40.cdf, 0

5/1/2001, $XP_PATH<0>/data/netcdf/time_data/tce_41.cdf, 0

10/1/2001, $XP_PATH<0>/data/netcdf/time_data/tce_42.cdf, 0

END
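The parsing rules above (comma-delimited date, file, and component; blanks and “#” comments ignored; -1 passing all components through) can be sketched in Python. This is an illustrative parser, not EVS code:

```python
def parse_tcf_line(line: str):
    """Parse one TCF time-step line into (date_time, file_name, component).
    Blank lines, '#' comments, and the optional END return None.
    A component of -1 means all data components pass through."""
    line = line.strip()
    if not line or line.startswith("#") or line == "END":
        return None
    date_time, file_name, component = (part.strip() for part in line.split(","))
    return date_time, file_name, int(component)

step = parse_tcf_line(
    "6/1/1990 12:00 AM, $XP_PATH<0>/data/netcdf/time_data/tce_01.cdf, 0"
)
```

Because the date and filename may themselves contain spaces, the comma is the only safe delimiter, which is why the format requires it.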

read_multi_tcf

The read_multi_tcf module is one of a limited set of Time_Data modules. These modules are specifically designed to create models and animations of data that changes over time. This type of data can result from water table elevation and/or chemical measurements taken at discrete times or output from Groundwater simulations or other 3D time-domain simulations.

The read_multi_tcf module creates a field using one or more Time Control Files (.TCF). Click here for an example of a TCF file and a description of the format.

The read_multi_tcf module creates a mesh grid with the interpolated data from a user-specified number of TCF files (n). It outputs the first data component from each of the first (n-1) TCF files and all of the time-interpolated data components from the nth TCF file.

For example, if you were trying to create a time animation of the union of 3 analytes (e.g. Benzene, Toluene & Xylene), read_multi_tcf allows you to select all three separate TCF files. Only the first data component from Benzene.tcf (nominally the concentration of benzene) is output as the new first data component. The first data component from Toluene.tcf (nominally the concentration of toluene) is output as the new second data component. All of the data components from Xylene.tcf are then output (typically xylene, confidence_xylene, uncertainty_xylene, Geo_Layer, Material_ID, Elevation, etc.). This allows you to explode layers and do other typical subsetting and processing operations on the output of this module.
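The component-selection rule in the example above can be sketched as follows (plain Python; each field is modeled as a dict of component name to values, a hypothetical structure chosen only for illustration):

```python
def merge_tcf_components(fields):
    """Combine interpolated fields from n TCF files the way read_multi_tcf
    does: the first data component of each of the first n-1 files,
    then every component of the nth file."""
    merged = {}
    for field in fields[:-1]:
        first_name = next(iter(field))   # first data component only
        merged[first_name] = field[first_name]
    merged.update(fields[-1])            # all components of the last file
    return merged

# Hypothetical analytes matching the Benzene/Toluene/Xylene example:
benzene = {"benzene": [1.0]}
toluene = {"toluene": [2.0]}
xylene = {"xylene": [3.0], "Geo_Layer": [0], "Elevation": [100.0]}
result = merge_tcf_components([benzene, toluene, xylene])
```

Listing the file whose full set of components you need (Geo_Layer, Elevation, etc.) last is what preserves the ability to explode layers and subset the output.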

The TCF files should be created using identical grids with date ranges that overlap the time period of interest.

read_multi_tcf effectively includes an inter_time_step module internally in that it performs the interpolation between appropriate pairs of the files/data_components specified in the TCF file. Its internal structure only requires reading two successive time steps rather than the complete listing of time steps normally represented in a time_data field.


Subsections of read multi tcf

TCF File Format and Example

The listing below is the full contents of the Time Control File control_tce_cdf.tcf. Blank lines or any lines beginning with a “#” are ignored. Valid lines representing time steps must be in order of ascending time and consisting of:

a) a date and/or time in Windows standard format

b) a file name with an absolute path or just the filename (if the data files are in the same directory as the TCF file). This is not a true relative path (..\file.cdf and subdir\file.cdf don’t work, but file.cdf does), but gives some of the relative path abilities.

c) the data component to use for that time step. (You can specify -1 in the third column, which causes ALL the data components to pass through.)

NOTE: These three items on each line must be separated with a comma “,”.

# This file contains the list of control commands for the
# TCE time data in netCDF format.
# The format is a date/time, then the file, then the nodal data component.
# The END on the last line is optional.
# Each line MUST be comma delimited
# (since spaces can exist in the time and filename)
6/1/1990 12:00 AM, $XP_PATH<0>/data/netcdf/time_data/tce_01.cdf, 0
12/1/1990, $XP_PATH<0>/data/netcdf/time_data/tce_02.cdf, 0
2/1/1991, $XP_PATH<0>/data/netcdf/time_data/tce_03.cdf, 0
5/1/1991, $XP_PATH<0>/data/netcdf/time_data/tce_04.cdf, 0
8/1/1991, $XP_PATH<0>/data/netcdf/time_data/tce_05.cdf, 0
11/1/1991, $XP_PATH<0>/data/netcdf/time_data/tce_06.cdf, 0
3/1/1992, $XP_PATH<0>/data/netcdf/time_data/tce_07.cdf, 0
6/1/1992, $XP_PATH<0>/data/netcdf/time_data/tce_08.cdf, 0
10/1/1992, $XP_PATH<0>/data/netcdf/time_data/tce_09.cdf, 0
3/1/1993, $XP_PATH<0>/data/netcdf/time_data/tce_10.cdf, 0
4/1/1993, $XP_PATH<0>/data/netcdf/time_data/tce_11.cdf, 0
8/1/1993, $XP_PATH<0>/data/netcdf/time_data/tce_12.cdf, 0
12/1/1993, $XP_PATH<0>/data/netcdf/time_data/tce_13.cdf, 0
3/1/1994, $XP_PATH<0>/data/netcdf/time_data/tce_14.cdf, 0
6/1/1994, $XP_PATH<0>/data/netcdf/time_data/tce_15.cdf, 0
9/1/1994, $XP_PATH<0>/data/netcdf/time_data/tce_16.cdf, 0
11/1/1994, $XP_PATH<0>/data/netcdf/time_data/tce_17.cdf, 0
3/1/1995, $XP_PATH<0>/data/netcdf/time_data/tce_18.cdf, 0
5/1/1995, $XP_PATH<0>/data/netcdf/time_data/tce_19.cdf, 0
8/1/1995, $XP_PATH<0>/data/netcdf/time_data/tce_20.cdf, 0
10/1/1995, $XP_PATH<0>/data/netcdf/time_data/tce_21.cdf, 0
1/1/1996, $XP_PATH<0>/data/netcdf/time_data/tce_22.cdf, 0
5/1/1996, $XP_PATH<0>/data/netcdf/time_data/tce_23.cdf, 0
9/1/1996, $XP_PATH<0>/data/netcdf/time_data/tce_24.cdf, 0
11/1/1996, $XP_PATH<0>/data/netcdf/time_data/tce_25.cdf, 0
12/1/1996, $XP_PATH<0>/data/netcdf/time_data/tce_26.cdf, 0
3/1/1997 12:00 AM, $XP_PATH<0>/data/netcdf/time_data/tce_27.cdf, 0
6/1/1997, $XP_PATH<0>/data/netcdf/time_data/tce_28.cdf, 0
9/1/1997, $XP_PATH<0>/data/netcdf/time_data/tce_29.cdf, 0
12/1/1997, $XP_PATH<0>/data/netcdf/time_data/tce_30.cdf, 0
3/1/1998, $XP_PATH<0>/data/netcdf/time_data/tce_31.cdf, 0
6/1/1998, $XP_PATH<0>/data/netcdf/time_data/tce_32.cdf, 0
9/1/1998, $XP_PATH<0>/data/netcdf/time_data/tce_33.cdf, 0
11/1/1998, $XP_PATH<0>/data/netcdf/time_data/tce_34.cdf, 0
5/1/1999, $XP_PATH<0>/data/netcdf/time_data/tce_35.cdf, 0
10/1/1999, $XP_PATH<0>/data/netcdf/time_data/tce_36.cdf, 0
3/1/2000, $XP_PATH<0>/data/netcdf/time_data/tce_37.cdf, 0
7/1/2000, $XP_PATH<0>/data/netcdf/time_data/tce_38.cdf, 0
11/1/2000, $XP_PATH<0>/data/netcdf/time_data/tce_39.cdf, 0
3/1/2001, $XP_PATH<0>/data/netcdf/time_data/tce_40.cdf, 0
5/1/2001, $XP_PATH<0>/data/netcdf/time_data/tce_41.cdf, 0
10/1/2001, $XP_PATH<0>/data/netcdf/time_data/tce_42.cdf, 0
END
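The parsing rules above can be sketched in Python. This is an illustrative reader, not EVS’s actual implementation; the two Windows date formats tried (date plus 12-hour time, or date alone) are assumptions based on the examples in the listing.

```python
from datetime import datetime

def parse_tcf(text):
    """Parse TCF lines into (datetime, path, component) tuples.

    Blank lines and '#' comments are ignored, fields are comma-separated,
    an optional END terminates the file, and times must be ascending.
    """
    formats = ("%m/%d/%Y %I:%M %p", "%m/%d/%Y")  # date+time, then date only
    steps = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line.upper() == "END":  # the END terminator is optional
            break
        date_s, path, comp = (f.strip() for f in line.split(",", 2))
        for fmt in formats:
            try:
                when = datetime.strptime(date_s, fmt)
                break
            except ValueError:
                continue
        else:
            raise ValueError(f"unrecognized date: {date_s}")
        steps.append((when, path, int(comp)))  # -1 passes all components
    if any(a[0] >= b[0] for a, b in zip(steps, steps[1:])):
        raise ValueError("time steps must be in ascending order")
    return steps

example = """
# comment line, ignored
6/1/1990 12:00 AM, tce_01.cdf, 0
12/1/1990, tce_02.cdf, -1
END
"""
print(parse_tcf(example))
```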

time_value

The time_value module is used to parse a TVF file consisting of dates, values, and (optional) labels. The starting and end dates are read from the file and the controls can be used to interpolate the values to the date and time of interest.

Module Input Ports

  • Date [Number] Accepts a date

Module Output Ports

  • Start Date [Number] Outputs the starting date
  • End Date [Number] Outputs the ending date
  • Date [Number] Output date
  • Current Date and Time Label [String] Resulting string for the output date
  • Current Date and Time Value [Number] Resulting value for the output date

Subsections of time value

TVF File Format

TVF files provide a way to generate a time-varying numeric value and an optional string (label). The file is similar to the TCF file, but does not reference information in external files.

The file consists of two or more rows, each having 2 or 3 columns of information. The columns must contain:

  1. Date and/or time in Windows standard format
  2. A numeric (float) value (required)
  3. A string consisting of one or more words. These need not be in quotes. Everything on the row after the numeric value will be used. (optional)

Dates must be in order from earliest to latest and not repeating. Only the label column is optional.

An example file follows:
 
06/01/12	-1.63   Spring Rains
06/04/12	-1.87
06/07/12	-2.17
06/10/12	-1.87
06/13/12	-1.9
06/16/12	-2.2
06/19/12	-1.9
06/22/12	-1.96   Summer
06/25/12	-1.81
06/28/12	-1.84
07/01/12	-1.69
07/04/12	-1.39
07/07/12	-1.33
07/10/12	-1.12
07/13/12	-0.85
07/16/12	-1.03
07/19/12	-1.06
07/22/12	-0.76
07/25/12	-0.61	Flood Event
07/28/12	-0.31
07/31/12	-0.31
08/03/12	-0.52
08/06/12	-0.37
08/09/12	-0.61
08/12/12	-0.85
08/15/12	-0.79
08/18/12	-0.76
08/21/12	-0.58
08/24/12	-0.64
08/27/12	-0.49
08/30/12	-0.46
09/02/12	-0.67
09/05/12	-0.91
09/08/12	-0.82
09/11/12	-1.09	""
09/14/12	-1.27
09/17/12	-1.3
09/20/12	-1.33
09/23/12	-1.51   Fall
09/26/12	-1.42
09/29/12	-1.69
10/02/12	-1.69
10/05/12	-1.78
10/08/12	-1.84
10/11/12	-1.96
10/14/12	-2.17
10/17/12	-2.29
10/20/12	-2.26
10/23/12	-2.05
10/26/12	-2.05
10/29/12	-1.84
11/01/12	-2.05
11/04/12	-2.23
11/07/12	-2.08
11/10/12	-2.2
11/13/12	-2.41
11/16/12	-2.62
11/19/12	-2.83
11/22/12	-2.62
11/25/12	-2.5
11/28/12	-2.29
12/01/12	-2.11
12/04/12	-2.2
12/07/12	-1.9
12/10/12	-2.08
12/13/12	-1.93
12/16/12	-1.81
12/19/12	-1.75
12/22/12	-1.63   Winter
12/25/12	-1.36
12/28/12	-1.45
12/31/12	-1.24
01/03/13	-1.21
01/06/13	-1
01/09/13	-1.27
01/12/13	-1.21
01/15/13	-1.18
01/18/13	-1.15
01/21/13	-1.12
01/24/13	-1.33
01/27/13	-1.39
01/30/13	-1.24
02/02/13	-1.3
02/05/13	-1.57
02/08/13	-1.66
02/11/13	-1.81
02/14/13	-1.69
02/17/13	-1.78
02/20/13	-1.78
02/23/13	-1.84
02/26/13	-1.72
03/01/13	-2.02
03/04/13	-2.23
03/07/13	-2.08
03/10/13	-2.02
03/13/13	-2.32
03/16/13	-2.11
03/19/13	-2.41
03/22/13	-2.65   Spring
03/25/13	-2.38
03/28/13	-2.47
03/31/13	-2.47
04/03/13	-2.32
04/06/13	-2.17
04/09/13	-2.14
04/12/13	-2.41
04/15/13	-2.65
04/18/13	-2.47
04/21/13	-2.35
04/24/13	-2.32
04/27/13	-2.38
04/30/13	-2.08
05/03/13	-1.93
05/06/13	-1.84
05/09/13	-1.57
05/12/13	-1.84
05/15/13	-1.57
05/18/13	-1.57
05/21/13	-1.69
05/24/13	-1.93
05/27/13	-1.78
05/30/13	-1.57
06/02/13	-1.84
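The interpolation the time_value module performs on a file like the one above can be sketched as follows. This is a simplified illustration, assuming simple linear interpolation between the two bracketing dates; function names and the two-digit-year format are assumptions based on the example file.

```python
from datetime import datetime

def parse_tvf(text):
    """Parse TVF rows: date, float value, optional multi-word label."""
    rows = []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        when = datetime.strptime(parts[0], "%m/%d/%y")
        value = float(parts[1])
        label = " ".join(parts[2:])  # everything after the value is the label
        rows.append((when, value, label))
    return rows

def value_at(rows, when):
    """Linearly interpolate the value column at an arbitrary date/time."""
    for (t0, v0, _), (t1, v1, _) in zip(rows, rows[1:]):
        if t0 <= when <= t1:
            frac = (when - t0) / (t1 - t0)  # timedelta division -> float
            return v0 + frac * (v1 - v0)
    raise ValueError("date outside the file's time range")

rows = parse_tvf("06/01/12\t-1.63\tSpring Rains\n06/04/12\t-1.87")
print(value_at(rows, datetime(2012, 6, 2, 12, 0)))  # halfway between rows
```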

time horizon

The time horizon module allows you to extract a surface from a set of time-based surfaces. The time for the extracted surface can be any time between the start and end of the surface set. It will interpolate between adjacent known times.

time_loop

General Module Function

The time_loop module is one of a limited set of Time_Data modules. These modules are specifically designed to create models and animations of data that changes over time. This type of data can result from water table elevation and/or chemical measurements taken at discrete times or output from Groundwater simulations or other 3D time-domain simulations.

The time_loop module allows you to loop through a series of times or specify a time for interpolation from a time field.

  • group objects
  • group objects to 2d overlay
  • trigger script
  • merge fields
  • float math
  • create tin
  • material to cellsets
  • loop
  • modify data 3d
  • create mask

Subsections of Tools

group objects

group objects is a renderable object that contains other subobjects, which have the attributes that control how the rendering is done. Unlike DataObject, group objects does not include data. Instead, it is meant to be a node in the rendering hierarchy that groups other DataObjects together and supplies common attributes for them. This object is connected directly to one of the viewers (for example, Simpleviewer3D), to another DataObject, or to another group objects. A group objects is included in all the standard viewers provided with the EVS applications.

Module Input Ports

  • Input Objects [Field] Accepts any number of red object ports from modules to be grouped

Module Output Ports

  • Output Object [Renderable]: Outputs to the viewer.

group objects combines the following:

* DefaultDatamap to convert scalar node or cell data to RGB color values. By default, the datamap’s minimum and maximum values are 0 and 255, respectively. This datamap is inherited by any children objects if they do not have their own datamaps.

* DefaultProps to control color, material, line attribute, and geometrical attributes.

* DefaultModes to control point, line, surface, volume, and bounds rendering modes.

* DefaultPickInfo to contain information when this object is picked.

* DefaultObject to control visibility, pickability, caching, transform mode, surface conversion, and image display attributes.

group objects to 2d overlay

The group objects to 2d overlay module applies any connected module’s output to the viewer’s 2D overlay. Objects in the overlay are not transformed (rotated, zoomed, panned); they are locked in position. This provides a mechanism for graphics such as title blocks or company logos.

However, you must ensure that the object sent to the 2D overlay fits inside its limited spatial extent. The 2D overlay is a window with an x-extent from -1.0 to 1.0. The y-extent is dependent on the aspect ratio of the viewer. With a default viewer having a 4:3 aspect ratio, it is three-quarters of the x-extent (e.g. -0.75 to 0.75).
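The overlay extent rule above can be expressed as a tiny helper. This is a sketch of the stated geometry (x fixed at [-1, 1], y scaled by aspect ratio); the function name is hypothetical.

```python
def overlay_y_extent(width, height):
    """Y-extent of the 2D overlay for a viewer of the given pixel size.

    The overlay's x-extent is fixed at [-1.0, 1.0]; the y-extent scales
    with the viewer's aspect ratio (height / width).
    """
    half = height / width  # y half-extent as a fraction of the x half-extent
    return (-half, half)

print(overlay_y_extent(800, 600))  # default 4:3 viewer -> (-0.75, 0.75)
```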

trigger_script

The trigger_script module provides a powerful way to link parameters and actions of multiple modules. This gives you the ability for a sequence of events to be “triggered” as the result of one or more parameters changing.

The module requires that a Python script be created; the script runs whenever any of the triggers you “Add” change. Triggers are module parameters whose changes cause the script to be run. The script can do just about ANYTHING.

In addition to the Triggers that you specify, there are 4 input (and output) ports that accept numbers (such as a plume level) that can be used in your script, and are more readily accessible without accessing the Python script.

Module Input & Output Ports

  • N1 [Number] Accepts a number
  • N2 [Number] Accepts a number
  • N3 [Number] Accepts a number
  • N4 [Number] Accepts a number

merge_fields

merge_fields combines the input fields from up to 4 separate inputs into a unified single field with any number of nodal data components, which can be output to other modules (for processing), OR directly to the viewer. This is useful when you want to slice through or otherwise subset multiple fields using the same criteria (modules).

You must be aware that fields contain more than just grids and data. They contain meta-data set during the creation of those grids and data, including, but not limited to:

  • Data Processing (log or linear)
  • Coordinate units
  • Data units (mg/kg or %)
  • Data Min and Max values (ensures that datamaps from kriging match datamaps in post samples)

NOTE: There are potential dangers and serious consequences of merging fields because we allow for merging of data without requiring strict name or meta data matching.

  • Meta data from the leftmost input field is always used for the merged result.

  • You can only merge fields having the same number of nodal and/or cell data components.

  • We do not require strict name matching, therefore it is possible to merge data with very negative consequences. Examples are:

    • Benzene data from one input field with Toluene from another field.
    • Log Processed TPH data with linear processed TPH data.
    • One field with coordinate units of meters with another in feet.
  • Overlapping Volumes: When you merge fields you must be aware that this is not an alternative way to create the union of multiple plumes.

    • The merge fields module does not remove overlapping volumes.
    • Volume calculations with volumetrics can count overlapping regions multiple times giving nonsensical values.

The Merge Cell Sets When Possible option works only if you have matching types and names. A good, and appropriate example is merging fault blocks so that all “Clay” cell sets are controlled as a single entity.
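Because meta-data from the leftmost field silently wins, a pre-merge consistency check along the lines of the hazards listed above can save grief. The dictionary keys below are illustrative stand-ins, not EVS’s actual field meta-data structure.

```python
def check_merge(fields):
    """Warn about merge hazards: mismatched analyte names, data or
    coordinate units, or log/linear processing between input fields.

    `fields` is a list of dicts; the first (leftmost) entry is the one
    whose meta-data would be kept by a merge.
    """
    warnings = []
    first = fields[0]
    for other in fields[1:]:
        for key in ("data_name", "data_units", "coord_units", "processing"):
            if other.get(key) != first.get(key):
                warnings.append(
                    f"{key}: {first.get(key)!r} vs {other.get(key)!r}"
                )
    return warnings

a = {"data_name": "TPH", "data_units": "mg/kg",
     "coord_units": "m", "processing": "log"}
b = {"data_name": "TPH", "data_units": "mg/kg",
     "coord_units": "ft", "processing": "log"}
print(check_merge([a, b]))  # flags the meters-vs-feet mismatch
```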

Module Input Ports

  • First Input Field [Field] Accepts a data field.
  • Second Input Field [Field] Accepts a data field.
  • Third Input Field [Field] Accepts a data field.
  • Fourth Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Field [Field] Outputs the field with all inputs merged
  • Output Object [Renderable]: Outputs to the viewer.

float_math

This module provides a simple means to perform mathematical operations on numbers coming from up to 4 input ports. By using multiple float_math modules, any number of values may be combined.

The default equation is f1 + f2 + f3 + f4, which sums all four input ports.

Any of the Available Mathematical Operators may be used.

The output (rightmost output port) is the numeric value resulting from the equation.

The value will update when any of the input values are changed unless the checkbox next to the input value is turned off.

Module Input Ports

  • Input Value1 [Number] Accepts number 1
  • Input Value 2 [Number] Accepts number 2
  • Input Value 3 [Number] Accepts number 3
  • Input Value 4 [Number] Accepts number 4
  • Input String 1 [String] An input string

Module Output Ports

  • Output Value1 [Number] Outputs number 1
  • Output Value 2 [Number] Outputs number 2
  • Output Value 3 [Number] Outputs number 3
  • Output Value 4 [Number] Outputs number 4
  • Output String 1 [String] Outputs string 1
  • Result Value [Number] The final output
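The behavior described above can be sketched in plain Python. This is an illustration only: it uses Python expression syntax rather than float_math’s actual operator set, and the exposed function names (sqrt, log, ln) are assumptions.

```python
import math

def float_math(equation, f1=0.0, f2=0.0, f3=0.0, f4=0.0):
    """Evaluate an equation of f1..f4, as float_math does with its
    four numeric input ports, returning the Result Value."""
    names = {"f1": f1, "f2": f2, "f3": f3, "f4": f4,
             "sqrt": math.sqrt, "log": math.log10, "ln": math.log}
    # eval with stripped builtins keeps the sketch to pure expressions
    return eval(equation, {"__builtins__": {}}, names)

print(float_math("f1 + f2 + f3 + f4", 1, 2, 3, 4))  # the default equation
```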

create_tin

The create_tin module is used to convert scattered sample data into a three-dimensional surface of triangular cells representing an unstructured mesh.

“Scattered sample data” means that there are discrete nodes in space. An example would be geology or analyte (e.g. chemistry) data where the coordinates are the x, y, and elevation of a measured parameter. The data is “scattered” because there is not necessarily an implicit grid of data.

create_tin uses a proprietary version of the Delaunay tessellation algorithm.

Module Input Ports

  • Input Points [Field] Accepts a data field of points or uses the nodes (points) from lines

Module Output Ports

  • Output Field [Field] Outputs the surface data field
  • Output Object [Renderable]: Outputs to the viewer.

material_to_cellsets

material_to_cellsets is intended to receive a 3D field into its input port which has been processed through a module like plume. If the original field (pre-plume) had multiple cell sets related to geologic units or materials the output of plume will generally have only two cell sets which comprise all hexahedron and all tetrahedron cells. The ability to control the visibility of the layer-cell sets is normally lost.

This module takes plume’s output and recreates the cell sets based on nodal data. However, since each geologic layer will likely have two cell sets (one for its hexahedron cells and one for its tetrahedron cells), the output tends to have twice as many cell sets as the original pre-plume field.

Module Input Ports

  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Field [Field] Outputs the processed field.

loop

The loop module iterates an operation. For example, you could use a loop object to control the movement of an object in your application; such as incrementing the movement of a slider for a slice plane.

modify_data_3d

The modify_data_3d module provides the ability to interactively change data in 3D volumetric models. This is not a recommended practice since volumetric models created in EVS generally have underlying statistical measures of quality that will be meaningless if the data is modified in any way.

However, it is not unusual for a model to occasionally have regions where extrapolation artifacts cause shards of plumes to appear. This module provides a way to remove those.

The basic approach is to move the modification sphere to the problem region and set the size and shape of the ellipsoid before changing your data.

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration).
  • Input Field [Field] Accepts a data field from 3d estimation or other similar modules.

Module Output Ports

  • Output Field [Field] Outputs the field with modified data
  • Sample Data [Renderable]: Outputs to the viewer

The cloud-of-points display from this module shows the adaptively gridded regions with clusters of nodes.

Note: This module does not modify the upstream data.
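The ellipsoidal selection step described above (move the sphere to the problem region, set its size and shape, then modify the enclosed data) can be sketched as follows. The function name, z-scale handling, and node layout are hypothetical, not modify_data_3d’s internals.

```python
def nodes_in_ellipsoid(nodes, center, radii, z_scale=1.0):
    """Return indices of nodes inside the modification ellipsoid.

    The ellipsoid is centered at `center` with semi-axes `radii`;
    z is exaggerated by `z_scale` to match the viewer, as the module's
    Z Scale input port suggests.
    """
    cx, cy, cz = center
    rx, ry, rz = radii
    picked = []
    for i, (x, y, z) in enumerate(nodes):
        dx = (x - cx) / rx
        dy = (y - cy) / ry
        dz = (z - cz) * z_scale / rz
        if dx * dx + dy * dy + dz * dz <= 1.0:  # unit-sphere test
            picked.append(i)
    return picked

# the first node lies inside a radius-2 sphere at the origin; the second does not
print(nodes_in_ellipsoid([(0, 0, 0), (5, 0, 0)], (0, 0, 0), (2, 2, 2)))
```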


  • viewer

Subsections of View

viewer

The viewer accepts renderable objects from all modules with red output ports to include their output in the view.

Module Input Ports

  • Objects [Renderable]: Receives renderable objects from any number of modules

Module Output Ports

  • View [View / minor] Outputs the view information used by other modules to provide all model extents or interactivity

viewer Properties:

The user interfaces for the viewer are arranged in 10 categories which cover interaction with the scene, the characteristics of the viewer as well as various output options.

  • These features are all available in the Viewer Properties and many of them are accessible in the Viewer Contents. The categories are:
  1. Properties: includes the ability to set the view (Azimuth, Inclination, Scale, Perspective, etc.), pick objects and probe their data and control how the view scale reacts as new objects or data are added to the scene.
  2. Window Size: sets the size of the viewer. The view has apparent size (the size of the visible window) and the true image size. Outputting a high resolution image involves setting a true image size to match your desired output dimensions.
  3. Output Image: includes the ability to export the view in PNG, BMP, JPG, or TIF format. Additional view scaling options are included.
  4. Distance Tool: provides an interactive means to measure the distance between points in the viewer’s scene and to export the line between two points in C Tech’s ELF format.
  5. Background: sets the style and colors for the background.
    1. The default, 2 color background will be saved in 4DIMs and will display in all output.
    2. Use Unlocked Background for VRML output. Please note that Unlocked Backgrounds are not inherited in a 4DIM and therefore the background can be changed.
  6. View: provides controls for depth sorting.
  7. Lights: provides the ability to control one or more lights in the scene and their properties.
  8. Camera: provides detailed controls over the camera’s interaction with the scene of objects.
  9. Record 4DIM: provides the ability to export the scene in C Tech 4DIM format. Please note that 4DIMs have been officially supplanted by CTWS and will likely be deprecated in late 2024.
  10. Write_VRML: provides the ability to export the scene for 3D printing.

Object Manipulation in the viewer

When the viewer is instanced, it opens a window in which objects connected to the viewer are rendered and can be manipulated. Objects can be transformed and scaled in the viewer window by using combinations of mouse actions and various keys on the keyboard.

  • Rotation of objects in the viewer is accomplished by clicking and dragging on any portion of the viewer window with the left mouse button.
  • Translation of objects in the viewer is accomplished by clicking and dragging on any portion of the viewer window with the right mouse button.
  • Zooming of an object in the viewer is accomplished using the mouse wheel. Alternatively by depressing the Shift button while clicking and dragging the middle-mouse towards the upper right to zoom IN or lower left to zoom OUT.
  • output images
  • Recording (Capturing) 4DIM Files
  • write vrml

Subsections of viewer

Output Images

The View Scale parameter allows you to specify that your output image will be “n” times larger (or smaller, if a fraction less than 1.0 is specified) than your current Window Size.

When the Autoscale FF Font toggle is selected all Forward Facing fonts in the image will be scaled depending upon the size of the output image.

The suffix specified for the Image Filename determines the type of output.

  • For PNG (portable network graphics), a compression slider is provided. The max value of 9 results in a very small increase in compute time for compressing the images. Since PNG is a LOSSLESS compression format, the quality of the image is not affected by this value.
  • For JPEG, a quality parameter is provided. Higher qualities result in less LOSS to the image but create much larger files. We recommend using PNG instead of JPEG whenever possible. The PNG images are often smaller and are always higher quality than a JPEG image.

The Anti Aliasing option renders an image that is twice as big as the specified Width and Height. This high resolution image is then filtered and subsetted to the specified size. This process reduces the brightness (contrast) of fine lines but it also smooths the lines and dramatically reduces jaggies.

The Mask Background toggle allows you to create an image with a transparent background. In order to accomplish this, several things must be done:

  • You must specify an image type that supports transparent backgrounds. PNG is recommended
  • You must have a background color which is unique from any pixels in your objects which are rendered. This can be somewhat difficult if you have a rendered object with shading and specular highlights. Shading creates darker versions of the colors in your datamap and specular highlights creates less saturated (more white) versions of those colors. To avoid creating object colors that match your background, a masking background color should be selected which has a unique HUE not found in your datamap.
  • Anti-Aliasing and filtering will intelligently detect the edges that are transparent and not mix in “pink” edges on your objects.

NOTE: There is no tolerance for matching the background color. The color must match the RGB value exactly.

TIP: The mask background function can be used to create transparent HOLES in your images. For example, a lake rendered in a unique color could become a transparent hole in your rendered output. To accomplish this, the object representing the lake must be colored to exactly match your mask color, and it must have its surface rendering set to “Flat Shading”.

The Select File button brings up a standard Windows file browser for choosing the name and location of the file to create. The Accept Current Values push button begins creation of the file.
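The exact-match masking rule above (no tolerance; the RGB value must match exactly) can be sketched on raw pixel data. This is an illustration on nested lists, not the viewer’s actual implementation.

```python
def mask_background(pixels, bg):
    """Convert RGB pixels to RGBA, making every pixel that exactly
    matches the background color fully transparent (alpha 0) and all
    other pixels opaque (alpha 255). No color tolerance is applied.
    """
    return [
        [(r, g, b, 0 if (r, g, b) == bg else 255) for (r, g, b) in row]
        for row in pixels
    ]

# one magenta background pixel, one object pixel
image = [[(255, 0, 255), (10, 20, 30)]]
print(mask_background(image, (255, 0, 255)))
```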

Recording (Capturing) 4DIM Files

The Record 4DIM output option in the Viewer provides the ability to export in C Tech’s proprietary 4DIM vector animation format.

Limitations

  • In some circumstances transform_group cannot be used with 4DIMs. It can cause the 4DIM extents to be different than they were in the EVS viewer. This has been noted when doing rotations.
    • In most cases, the transform_field module can be used instead, however it does not allow for multiple objects to be connected to its input.
  • volume_renderer is not compatible with 4DIMs
  • 4DIM files will not record any object whose cache has been disabled. This occurs when large fields are connected to the viewer. When this occurs (for external_faces in this example), the following message appears in the Status Window:

— Warning from: module: external_faces —

Field is too big (140 MB) to be put into GDobject’s cache (128 MB). Drawing the bounds only. Consider increasing the cache size or reducing the field’s complexity.


You will also know this has happened when you see an object in your viewer that is only the white bounds of what SHOULD be displayed. Such as:

When this occurs, the procedure to fix it is:

  1. Select the object using the Choose Object to Edit button in the viewer’s Properties.
  2. Increase the cache size from the default value of 128 MB to a larger value.

Operation

When in Manual mode, frames (3D models) are saved only when the “Record a Single Frame” button is pressed. When in Automatic mode, a frame is appended to the 4DIM animation every time the model is changed. The definition of “model is changed” is not the same as the automatic mode in output_images. For this module, a change is defined as a change to one or more of the 3D objects in the viewer. Merely manipulating the view with Az-Inc or your mouse does not constitute a change, because recording frames that represent viewer manipulations would be wasted effort: 4DIM files can be manipulated exactly the same way you manipulate the viewer. With 4DIM files we only want to save frames that represent changes to the content in the viewer.

Before the 4DIM file is written, you have the option of deleting the last frame (this can be done repeatedly) or clearing all frames. When creating small 4DIMs manually, this can be useful.

What is saved?

Some geometries may not display properly when the animation is played back. In particular, volume rendering is not supported.

Geometry that does not change from frame-to-frame is not re-saved. Instead, a reference is made to the previous frame so that data does not need to be duplicated. Invisible objects (visible set to zero) are not captured.

View attributes will not be saved as part of the animation.

Attributes that can be saved

  1. Visibility

  2. Transparency

  3. Most object modes (rendering modes and line modes)

  4. Background color and background type

    1. If Locked 2 or 4 color backgrounds are used, they cannot be changed by the user in the 4DIM player

View, Light and Camera Attributes

All view attributes can be changed.

All light attributes can be changed.

The following camera attributes can be changed:

  • lens
  • clipping plane
  • depth cueing

write_vrml

The write_vrml output in the viewer is able to output most graphics objects in the viewer to a VRML-formatted file.

VRML is a network transparent protocol for communicating 3D graphics. It has fallen out of favor on the web, though it is still a standard for 3D model output.

We provide VRML output for two primary purposes:

  1. Export of 3D models for conversion to 3D PDF
  2. Export of 3D models for 3D Printing

Known Issues

  • Turn on the “Use Unlocked Background” option in the viewer->Background editor when writing VRML files, since the background is otherwise rendered as a small square at the origin.
  • Always set your viewer to a Top View (180 Azimuth and 90 Inclination) before writing the VRML file.
  • Do not use any modules which display in the 2D overlay. The 2D overlay is analogous to drawing on the glass on a TV or monitor. Items in the 2D overlay do not move, rotate or scale when you manipulate your 3D model. Examples are add_logo, Titles, and legend.
  • Do not use volume rendering. These techniques are not supported.
  • VRML does not support the full spectrum of data coloring supported in EVS.
    • Though both cell and nodal data coloring is supported, sometimes combinations of these cause problems.
    • Object colors (such as the red, blue, green grid lines of the axes module) often revert to white (uncolored). This can be problematic on a white background.
    • The texture_colors module is recommended for final output of almost all colored objects to help avoid these issues.
  • Trial and error is often the only way to determine which combinations of rendering modes are supported, especially for 3D PDF and 3D printing. Remember that these vendors’ software packages all interpret VRML files in slightly different ways. You will likely not be able to do everything you can do in a 4DIM or in EVS.
  • VRML viewers: There is a list of VRML viewing software published by the National Institute of Standards and Technology. We recommend Cosmo, though it is far from perfect. We have created VRML files which will not display correctly in any of the VRML viewers we have tested (including Cosmo), but which DO convert to 3D PDF perfectly. Conversely, there are occasions when something will look fine in VRML but not convert properly to 3D PDF.

Module Input Ports

  • View [View] Connects to the viewer to receive all objects in the view
  • Guidelines for 3D PDF Creation

    The following is a list of guidelines that must be considered when making EVS models that will be output as 3D PDF files using the C Tech 3D PDF Converter.

  • Guidelines for 3D Printing

    Guidelines for 3D Printing The following is a list of guidelines that must be considered when making visualizations that will be printed using 3D Systems (previously Zcorp) technology. As of this software release, no other full color 3D printer has been successfully tested with output from write_vrml. You must follow the guidelines in write_vrml in addition to those listed here.

Subsections of write_vrml

The following is a list of guidelines that must be considered when making EVS models that will be output as 3D PDF files using the C Tech 3D PDF Converter.

Note: The C Tech 3D PDF Converter is a separately purchased product not included with any other C Tech software licenses. Please see www.ctech.com for pricing.

These guidelines apply to EVS output from write_vrml. You must follow the guidelines in write_vrml in addition to those listed here.

Let’s begin by building a simple application whose output is:

The first things we MUST do for VRML output are to remove the legend and use an Unlocked Background. If you see a gradient background in your viewer, you definitely aren’t using an unlocked background. Once you use an unlocked background, you can still set a solid (single) background color.

Always set your viewer to a Top View (180 Azimuth and 90 Inclination) before writing the VRML file.

If we output this current model as VRML and convert it to 3D PDF, the results are less than wonderful:

The above 3D PDF has three obvious problems:

  1. The top and bottom of the plume are very dark.
  2. The slice is dark.
  3. post_sample’s borings are dark.

We need to modify the application using two texture_colors modules as follows:

You’ll notice that in the revised application, the output in the viewer is virtually identical. This addresses the first two problems; we expect to resolve the dark borings in an upcoming release.

If we export this model to VRML and convert to PDF, the result is:

One other issue is that by default, we create isolines coincident with the surface(s) and resolve the coincidence in EVS using jitter. At some rotations you will notice that the isolines may disappear. This can be because jitter is not supported, but also because the underlying surface is so bright that the lines are not distinguishable.

This can be addressed using the surface_offset parameter in isolines. This will offset the lines from the surface (in one direction) and eliminate the coincidence. However, the lines will then not be visible from one side of the slice. Making the lines uncolored is another option.

Guidelines for 3D Printing

The following is a list of guidelines that must be considered when making visualizations that will be printed using 3D Systems (previously Zcorp) technology. As of this software release, no other full color 3D printer has been successfully tested with output from write_vrml. You must follow the guidelines in write_vrml in addition to those listed here.

These guidelines are provided to minimize printing problems. Users should fully understand the issues below or they will likely not create VRML files suitable for 3D printing. Given the cost of the raw material it is best to do it right the first time!

Many of these issues (if not heeded) will be obvious when the model is viewed in Z Corp’s ZPrint software. Make sure the model is carefully examined in ZPrint before actual printing.

  1. Internal Faces: You must avoid internal faces. These naturally occur when we cut a hexahedral volumetric model with our older plume module. The volumetric subset consists of hexahedron and tetrahedron cells. This creates surfaces that are internal to the model even though they represent the external faces of each set of cells. The real problem here is that the mating surfaces of each cell set are coincident (see 3 below). This major problem and many others are resolved by the intersection shell module.

  2. Normals: Must have all surface normals facing outward to define a solid volume for printing (handled by intersection shell module)

  3. Coincident surfaces: You CANNOT HAVE coincident surfaces. If two layers (or other objects) have coincident surfaces this will result in open parts and printing problems. You must separate the parts by a small amount (recommend 0.005 inches in final printed size) which should not be noticeable visually. Z-Print’s process will fuse these parts together (because there isn’t sufficient gap to keep them truly separate).

  4. Overlapping parts: This is supported. It is possible to have two closed volumes overlap each other and Z-Print will sort it out so long as 1, 2 and 3 above are still valid.

  5. Surfaces: Must be extruded or represented as a volumetric layer. Surfaces have no thickness and if placed coincident with the top of a volumetric object will result in leaving the volume OPEN (unclosed). This will cause serious problems.

  6. Cell Data: Another limitation is the inability to mix nodal and cell data. Since we use nodal data for so many things you should always strip out the cell data and use nodal data exclusively. You must be aware of the following:

    1. Ensure that there are no modules connected to the viewer that contain cell data. The safest way to ensure this is to pass questionable modules through extract_mesh with “Remove Cell Data” toggle ON. Normally you would want the “Remove Nodal Data” toggle OFF.
    2. If you want your cell data (colors) to be displayed, pass the cell data through the cell data to node data module. However be aware that you’ll still need to use extract_mesh afterwards because cell data to node data doesn’t remove the cell_data it just creates new nodal data from cell data.
    3. Typical modules that have cell data are import vector gis, lithologic modeling, Solid_3D_Set, Solid_contour_set, and most of the modules in the Cell Data library.
  7. Explode distance: Need to ensure that there is sufficient gap between exploded layers (separate parts) so that they don’t fuse together. Separation should be 1 mm (0.04 inches) minimum in the final print scale. Be aware that a 1 mm gap in the Z direction isn’t equivalent to a 1 mm separation if the mating parts have high slopes.  If your mating surfaces have a 45 degree slope, the separation is reduced by cos(45) (~0.7). If you have higher slopes such as 80 degrees, the factor would be ~0.17. This would mean that you would need a Z gap of nearly 6 mm to ensure a 1 mm separation between parts.

  8. Disconnected pieces: Although Z Print can print disconnected pieces, they won’t retain their spatial position. Plumes that aren’t connected to solid structure will just be loose pieces in the final print. This would also apply to post samples’ borings and spheres, unless they are connected by some common surface or geologic layer.

  9. Concepts that are NOT Supported:

    1. Points and Lines: Points and Lines cannot be printed (except as elements of an image used in a texture map). Lines must be converted to some 3D solid structure (such as closed tubes) and they must be of sufficient thickness to have some strength AND must not be disconnected pieces. Points should be represented as glyphs of sufficient size and not be disconnected.
    2. Transparency: Transparency as an object property cannot be supported since Z Print’s ink is printed onto opaque plaster or starch powder. The illusion of transparency could be achieved by creating a texture map that was a blend (using the image transition module) between two different images.
    3. Volume rendering: This is a subset of Transparency and therefore is not supported at all.
    4. Jitter: First, you must make sure that coincident surfaces are avoided anyway. Jitter is designed into EVS to allow preferential visualization of coincident objects. With Z Printing we cannot have coincidence in the first place! Offset the desired primary object to ensure that it is visible. Remember no lines and no surfaces!
  10. Thin sections: This is a somewhat subjective issue in that we really can’t tell you the definition of “too fragile.” We would recommend a minimum thickness of 0.5 mm, but depending on the width (total cross-sectional area of the section) this may be too fragile or exhibit too much distortion during curing. We still want to have lenses pinch out, but if sections get very thin, the pieces may break.

  11. Top View: You should write out the VRML file from a top view. If there are any truly flat (horizontal) surfaces, this keeps them flatter and smoother. Also, it helps to keep the models with the largest dimensions in the x-y plane (rather than z). This speeds up printing.
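The slope geometry in guideline 7 can be sanity-checked with a few lines of arithmetic. This is an illustrative calculation only (not EVS code): the required Z-direction gap grows as 1/cos(slope) of the desired true separation.

```python
from math import cos, radians

def required_z_gap(min_separation_mm: float, slope_deg: float) -> float:
    """Z-direction gap needed so mating surfaces with the given slope
    keep at least min_separation_mm of true (perpendicular) separation."""
    return min_separation_mm / cos(radians(slope_deg))

# 1 mm minimum separation at various mating-surface slopes:
# 0 deg -> 1.00 mm, 45 deg -> 1.41 mm, 80 deg -> 5.76 mm (i.e. nearly 6 mm)
for slope in (0, 45, 80):
    print(f"{slope:2d} deg slope -> Z gap {required_z_gap(1.0, slope):.2f} mm")
```

The 80-degree case reproduces the "nearly 6 mm" figure quoted in guideline 7.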

  • scat_to_unif

    scat_to_unif The scat_to_unif module is used to convert scattered sample data into a three-dimensional uniform field. Also, scat_to_unif can be used to take an existing grid (for example a UCD file) and convert it to a uniform field. scat_to_unif converts a field of non-uniformly spaced points into a uniform field which can be used with many of EVS’s filter and mapper modules. “Scattered sample data” means that there are disconnected nodes in space. An example would be geology or analyte (e.g. chemistry) data where the coordinates are the x, y, and elevation of a measured parameter. The data is “scattered” because there isn’t data for every x/y/elevation of interest.

  • merge_fences

    merge_fences The merge_fences module is used to merge the output from multiple krig_fence modules into one data set (i.e., to merge cross sections into a fence diagram). This is useful for performing uniform data manipulation procedures on fence data from several krig_fence outputs. For example, if several krig_fence modules are used, they should all pass through a merge_fences module before being passed to explode and scale. Therefore, all fences will be exploded and scaled the same amount and only one dialog box is needed to control all fences. merge_fences should always be used when more than one krig_fence module is used.

  • project field

    project_field General Module Function The project_field module is used to project the coordinates in any field, from one coordinate system to another. Module Control Panel The control panel for project_field is shown in the figure above. Each coordinate system is divided into either Geographic or Projected coordinate systems. The coordinate system types are navigated by selecting the appropriate system type in the far left window. When a general coordinate system has been selected a specific coordinate system can be selected from the center window. If there are any details regarding the selected specific coordinate system, they will appear in the text window on the right. A specific coordinate system must be selected both to project from and to project to as in the picture below.

  • geologic_surfmap

    geologic_surfmap This module is deprecated and replaced by project onto surface. geologic_surfmap provides a mechanism to drape lines onto Geologic surfaces. It compares to project onto surface, but lines are not subsetted to match the size of the cells of the surface on which the lines are draped. In other words, only the endpoints of each line segment are draped.

  • time_field

    time_field The time_field module allows you to extract a field (grid with data) from a set of time-based fields. The time for the extracted field can be any time between the start and end of the set of fields. It will interpolate between adjacent known times.

  • video_safe_area

    video_safe_area The video_safe_area module is used when creating an animation for DVD or Video. It displays the areas that are usable for both text and animation purposes for several standard video formats. This allows you to properly setup your animation in order to get the best possible output on multiple television sets.

  • advector

    advector The advector module combines streamlines capability and a tool for sequential positioning of glyphs along the streamlines trajectory to simulate advection of weightless particles through a vector field (for example, a fluid flow simulation such as modflow). The result is an animation of particle motion, with the particles represented as any EVS geometry (such as a jet or a sphere). The glyphs can scale, deflect or deform according to the velocity vector it passes. At least one of the nodal data components input to advector must be a vector. The direction of travel of streamlines can be specified to be forwards (toward high vector magnitudes) or backwards (toward low vector magnitudes) with respect to the vector field. The input glyphs travel along streamlines (not necessarily visible in the viewer) which are produced by integrating a velocity field using the Runge-Kutta method of specified order with adaptive time steps.

  • modpath_advector

    modpath_advector The modpath_advector module combines MODPATH capability and a tool for sequential positioning of glyphs along the MODPATH lines trajectory to simulate advection of weightless particles through a vector field. The result is an animation of particle motion, with the particles represented as any EVS geometry (such as a jet or a sphere). The glyphs can scale, deflect or deform according to the velocity vector it passes. The direction of travel of streamlines can be specified to be forwards (toward high vector magnitudes) or backwards (toward low vector magnitudes) with respect to the vector field. The input glyphs travel along streamlines (not necessarily visible in the viewer) which are produced by integrating a velocity field using the Runge-Kutta method of specified order with adaptive time steps.

  • read symbols

    read symbols The read symbols module creates symbolic representations of different borehole identifiers based on a set of user defined parameters. The symbols are displayed at the top of each borehole based on its x,y & z coordinates. A sample file with 48 predefined symbols is included, but it can be customized to produce special symbols.

  • create_spheroid

    create_spheroid This module is deprecated and replaced by place_glyph The create_spheroid module produces a 2D circular disc or 3D spheroidal or ellipsoidal grid that can be used for any purpose, however the primary application is as starting points for 3d streamlines or advector. Module Input Ports Input Field [Field] Accepts a field to extract its extent Module Output Ports

  • advect_surface

    advect_surface The advect_surface module combines surface streamlines capability and a tool for sequential positioning of glyphs along the streamlines trajectory to simulate advection of particles down a surface. The result is an animation of particle motion, with the particles represented as any EVS geometry (such as a jet or a sphere). The glyphs can scale, deflect or deform according to the velocity vector. The direction of travel of streamlines can be specified to be downhill or uphill (for the slope case). The input glyphs travel along streamlines (not necessarily visible in the viewer) which are produced by integrating a velocity field using the Runge-Kutta method of specified order with adaptive time steps.

  • fence_geology

    fence_geology The fence_geology module uses data in specially formatted .geo files to model the surfaces of geologic layers in vertical planes, or cross sections. Fence Geology essentially creates layers of quadrilateral (4 node) elements (in a vertical plane) in which each node (and element) is assigned to an individual geologic layer. The output of fence_geology is a data field, consisting of a 2D line with each layer’s elevation as nodal data elements, that can be sent to the krig_fence and horizons to 3d modules where the quadrilateral elements are connected to the element nodes in adjacent geologic surfaces to create layers along the fence.

  • file_output

    file_output The file_output module creates a formatted string based upon the values passed to it. This string is then written to the selected ascii text file. Certain modules such as 3d estimation, krig_2d, and krig_fence output a formatted string for just this purpose.

  • adaptive_indicator_krig

    adaptive_indicator_krig adaptive_indicator_krig is an alternative geologic modeling concept that uses geostatistics to assign each cell’s lithologic material as defined in a pregeology (.pgf) file, to cells in a 3D volumetric grid. There are two methods of lithology assignment: Nearest Neighbor is a quick method that merely finds the nearest lithology sample interval among all of your data and assigns that material. It is very fast, but generally should not be used for your final work. Kriging provides the rigorous probabilistic approach to geologic indicator kriging. The probability for each material is computed for each cell center of your grid. The material with the highest probability is assigned to the cell. All of the individual material probabilities are provided as additional cell data components. This will allow you to identify regions where the material assignment is somewhat ambiguous. Needless to say, this approach is much slower (especially with many materials), but often yields superior results and interesting insights. adaptive_indicator_krig is an extension of the technology in lithologic modeling for several reasons:

  • krig_fence

    krig_fence krig_fence models parameter distributions within domains defined by the boundaries of the input data in 3D Fence sections which can “snake” around in the x-y plane and are parallel to the z-axis. krig_fence can also receive the geologic system modeled by Fence Geology. It creates a quadrilateral finite-element grid with kriged nodal values of any scalar property and its kriged confidence level, and outputs a geometry whose elements can be rendered to view the color scaled parameter distribution on the element surfaces. krig_fence provides several convenient options for pre- and post-processing the input parameter values, and allows the user to consider anisotropy in the medium containing the property.

  • fence_geology_map

    fence_geology_map The fence_geology_map module creates a 3-dimensional fence diagram from the 1-dimensional line contours which follow your geology produced by fence_geology, to allow visualizations of the geologic layering of a system. It accomplishes this by creating a user specified distribution of nodes in the Z dimension between the top and bottom lines defining each geologic layer. The number of nodes specified for the Z Resolution may be distributed (proportionately) over the geologic layers in a manner that is approximately proportional to the fractional thickness of each layer relative to the total thickness of the geologic domain. In this case, at least three layers of nodes (2 layers of elements) will be placed in each geologic layer.

  • application_notes

    application_notes The application_notes module has been deprecated and replaced by the Annotation library’s “Notes”

  • texture_colors

    texture_colors This is a deprecated module. texture_colors functionality has been incorporated into all modules. On the Home tab, you have the Render Method selector where you can choose to use Vertex RGB coloring or Textures.

  • texture_wave

    texture_wave The texture_wave module utilizes transparency and texture mapping similar to texture_colors and illuminated_lines technology to create an animated effect. However, unlike illuminated_lines, this module works with both OpenGL and Software Rendering. texture_wave has a single input port that accepts the grid with nodal data that you want to color with this technique. This would normally be tubes or streamribbons.

  • illuminated_lines

    illuminated_lines Display of Illuminated Lines using texture mapped illumination model on polylines with line halo and animation effects. Prerequisites This module requires OpenGL rendering to be selected. This module utilizes special OpenGL calls to implement the illuminated line technique. If this module is used with another renderer, such as the software renderer or the output_images module (not set to Automatic), lines will be drawn in the default mode with illuminated line features disabled.

Subsections of Deprecated

scat_to_unif

The scat_to_unif module is used to convert scattered sample data into a three-dimensional uniform field. Also, scat_to_unif can be used to take an existing grid (for example a UCD file) and convert it to a uniform field. scat_to_unif converts a field of non-uniformly spaced points into a uniform field which can be used with many of EVS’s filter and mapper modules. “Scattered sample data” means that there are disconnected nodes in space. An example would be geology or analyte (e.g. chemistry) data where the coordinates are the x, y, and elevation of a measured parameter. The data is “scattered” because there isn’t data for every x/y/elevation of interest.

scat_to_unif lets you define a uniform mesh of any dimensionality and coordinate extents. It superimposes the input grid over this new grid that you have defined. Then, for each new node, it searches the input grid’s neighboring original nodes (where search_cube controls the depth of the search) and creates data values for all the nodes in the new grid from interpolations on those neighboring actual data values. You can control the order of interpolation and what number to use as the NULL data value should the search around a node fail to find any data in the original input.
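The search-and-interpolate behavior described above can be sketched in a few lines. This is an illustrative simplification (a 1D grid and inverse-distance weighting), not the EVS algorithm; the NULL fallback mirrors what happens when the search around a new node finds no original data.

```python
# Illustrative sketch (not EVS code): map scattered samples onto a uniform
# grid by interpolating from nearby samples, falling back to a NULL value
# when no samples lie within the search range.
NULL = -999.0

def scattered_to_uniform(samples, grid_xs, search_radius):
    """samples: list of (x, value) pairs; returns one value per grid node."""
    out = []
    for gx in grid_xs:
        near = [(abs(gx - x), v) for x, v in samples if abs(gx - x) <= search_radius]
        if not near:
            out.append(NULL)                       # search found no data
        elif any(d == 0.0 for d, _ in near):
            out.append(next(v for d, v in near if d == 0.0))
        else:                                      # inverse-distance weighting
            wsum = sum(1.0 / d for d, _ in near)
            out.append(sum(v / d for d, v in near) / wsum)
    return out

samples = [(0.0, 10.0), (1.0, 20.0), (4.0, 40.0)]
result = scattered_to_uniform(samples, [0.0, 0.5, 2.0, 3.9], search_radius=1.0)
# -> approximately [10.0, 15.0, 20.0, 40.0]
```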

Module Input Ports

  • Input Field [Field] Accepts a data field

Module Output Ports

  • Output Data [Field] Outputs the volumetric uniform data field

merge_fences

The merge_fences module is used to merge the output from multiple krig_fence modules into one data set (i.e., to merge cross sections into a fence diagram). This is useful for performing uniform data manipulation procedures on fence data from several krig_fence outputs. For example, if several krig_fence modules are used, they should all pass through a merge_fences module before being passed to explode and scale. Therefore, all fences will be exploded and scaled the same amount and only one dialog box is needed to control all fences. merge_fences should always be used when more than one krig_fence module is used.

Module Input Ports

  • First Input Field [Field] Accepts a data field.
  • Second Input Field [Field] Accepts a data field.
  • Third Input Field [Field] Accepts a data field.
  • Fourth Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Field [Field] Outputs the field with all inputs merged

project_field

General Module Function

The project_field module is used to project the coordinates in any field, from one coordinate system to another.

Module Control Panel

The control panel for project_field is shown in the figure above.

Each coordinate system is divided into either Geographic or Projected coordinate systems. The coordinate system types are navigated by selecting the appropriate system type in the far left window. When a general coordinate system has been selected a specific coordinate system can be selected from the center window. If there are any details regarding the selected specific coordinate system, they will appear in the text window on the right. A specific coordinate system must be selected both to project from and to project to as in the picture below.
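As a generic illustration of what projecting from a Geographic to a Projected coordinate system means (this is standard map-projection math, not the EVS implementation), here is the common lon/lat-to-Web-Mercator conversion:

```python
# Geographic lon/lat (degrees) to Web Mercator meters -- a widely used
# Projected coordinate system. Illustrative only; project_field supports
# many coordinate systems selected through its control panel.
from math import radians, log, tan, pi

R = 6378137.0  # spherical radius used by Web Mercator

def to_web_mercator(lon_deg, lat_deg):
    x = R * radians(lon_deg)
    y = R * log(tan(pi / 4 + radians(lat_deg) / 2))
    return x, y

x, y = to_web_mercator(-122.0, 45.0)   # projected easting/northing in meters
```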

Module Input Ports

  • Input Field [Field] Accepts a data field.

Module Output Ports

  • Output Field [Field] Outputs the projected field.

geologic_surfmap

This module is deprecated and replaced by project onto surface.

geologic_surfmap provides a mechanism to drape lines onto Geologic surfaces. It compares to project onto surface, but lines are not subsetted to match the size of the cells of the surface on which the lines are draped. In other words, only the endpoints of each line segment are draped.

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration).
  • Input Geologic Field [Field] Accepts a geologic field
  • Input Lines [Field] Accepts a field with the lines to be draped

Module Output Ports

  • Z Scale [Number] Outputs the Z Scale (vertical exaggeration).
  • Output Field [Field] Outputs the draped lines
  • Surface [Renderable]: Outputs the draped lines to the viewer.

time_field

The time_field module allows you to extract a field (grid with data) from a set of time-based fields. The time for the extracted field can be any time between the start and end of the set of fields. It will interpolate between adjacent known times.
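The interpolation described above can be sketched as a linear blend between the two fields adjacent in time (an illustrative sketch, not EVS code):

```python
# Sketch of time interpolation: given fields stored at known times, blend
# the two adjacent fields linearly to get the field at any time between.
def interpolate_field(times, fields, t):
    """times: sorted list; fields: per-time lists of nodal values; t: query time."""
    if not times[0] <= t <= times[-1]:
        raise ValueError("t is outside the time range of the field set")
    for i in range(len(times) - 1):
        if times[i] <= t <= times[i + 1]:
            frac = (t - times[i]) / (times[i + 1] - times[i])
            return [a + frac * (b - a) for a, b in zip(fields[i], fields[i + 1])]

times = [0.0, 10.0]
fields = [[1.0, 2.0], [3.0, 6.0]]
result = interpolate_field(times, fields, 2.5)   # 25% of the way: [1.5, 3.0]
```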

video_safe_area

The video_safe_area module is used when creating an animation for DVD or Video. It displays the areas that are usable for both text and animation purposes for several standard video formats. This allows you to properly setup your animation in order to get the best possible output on multiple television sets.


The Video Output Format selector changes the safe areas in the viewer window to match the default width and height values for the selected video format.

The Visible toggle turns the safe area display on and off. This toggle should always be off when making the actual video so the safe areas are not recorded.

The Move to Back toggle will put the safe area display behind any graphics in the viewer.

The Transparency slider changes the opacity of the safe area mask.

The Mask toggle turns the safe area masks on and off. The mask is a visual tool to help visualize which graphics fall into which safe area.

The Mask Text Area toggle turns the masking surrounding the text area on or off.

Mask Color alters the color of the masking.

The Lines toggle turns the lines defining the safe areas on and off.

The Labels toggle turns the labels defining the safe areas on and off.

The Action Border Color button selects the color of the action border.

The Text Border Color button selects the color of the text border.

Selecting Set viewer Res. sets the resolution of the viewer to the default for the video format that has been selected.

If the Preserve Width toggle is selected when the Set viewer Res. toggle is chosen, the current resolution width of the viewer will be maintained while the resolution height of the viewer will be based upon the appropriate ratio for the video format that has been selected.

If the Preserve Width toggle is unselected the Double Res toggle can be selected. The Double Res toggle will double the resolution of the viewer, while keeping the appropriate width-height ratio for the video format that has been selected. This should only be used while using the Screen Renderer output of output_images with the 4x4 anti-aliasing option.

The Update viewer button will set the viewer to the correct width and height if the Set viewer Res toggle has been selected.
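The safe-area rectangles the module draws can be sketched as centered sub-rectangles of the frame. The 90% action-safe and 80% title-safe fractions below are conventional broadcast values and an assumption for illustration; they are not taken from the module itself.

```python
# Hypothetical sketch of safe-area geometry (fractions are assumptions).
def safe_area(width, height, fraction):
    """Centered rectangle covering `fraction` of each dimension: (x, y, w, h)."""
    w, h = width * fraction, height * fraction
    return ((width - w) / 2, (height - h) / 2, w, h)

# NTSC DV frame
action = safe_area(720, 480, 0.90)   # conventional action-safe area
title = safe_area(720, 480, 0.80)    # conventional title (text)-safe area
```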

advector

The advector module combines streamlines capability and a tool for sequential positioning of glyphs along the streamlines trajectory to simulate advection of weightless particles through a vector field (for example, a fluid flow simulation such as modflow). The result is an animation of particle motion, with the particles represented as any EVS geometry (such as a jet or a sphere). The glyphs can scale, deflect or deform according to the velocity vector it passes. At least one of the nodal data components input to advector must be a vector. The direction of travel of streamlines can be specified to be forwards (toward high vector magnitudes) or backwards (toward low vector magnitudes) with respect to the vector field. The input glyphs travel along streamlines (not necessarily visible in the viewer) which are produced by integrating a velocity field using the Runge-Kutta method of specified order with adaptive time steps.
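The streamline integration described above can be illustrated with a fixed-step 4th-order Runge-Kutta integrator (EVS uses adaptive time steps and a selectable order; the fixed-step 2D version below is a conceptual sketch only):

```python
# Conceptual sketch: advance a particle through a 2D vector field with RK4.
def rk4_step(pos, velocity, dt):
    x, y = pos
    k1 = velocity(x, y)
    k2 = velocity(x + dt / 2 * k1[0], y + dt / 2 * k1[1])
    k3 = velocity(x + dt / 2 * k2[0], y + dt / 2 * k2[1])
    k4 = velocity(x + dt * k3[0], y + dt * k3[1])
    return (x + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

# Rotational test field: particles should orbit the origin at constant radius.
swirl = lambda x, y: (-y, x)
p = (1.0, 0.0)
for _ in range(1000):
    p = rk4_step(p, swirl, 0.01)
radius = (p[0] ** 2 + p[1] ** 2) ** 0.5   # stays very close to 1.0
```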

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration).
  • Input Field [Field] Accepts a field with vector data.
  • Input Starting Locations [Field] Accepts a data field.
  • Input Glyph [Field] Accepts a field representing the glyphs

Module Output Ports

  • Output Field [Field] Outputs the glyphs
  • Output Streamlines [Field] Outputs the streamlines field
  • Output Glyph [Renderable]: Outputs the glyphs to the viewer.
  • Output Streamlines Object [Renderable]: Outputs the streamlines to the viewer.

modpath_advector

The modpath_advector module combines MODPATH capability and a tool for sequential positioning of glyphs along the MODPATH lines trajectory to simulate advection of weightless particles through a vector field. The result is an animation of particle motion, with the particles represented as any EVS geometry (such as a jet or a sphere). The glyphs can scale, deflect or deform according to the velocity vector it passes. The direction of travel of streamlines can be specified to be forwards (toward high vector magnitudes) or backwards (toward low vector magnitudes) with respect to the vector field. The input glyphs travel along streamlines (not necessarily visible in the viewer) which are produced by integrating a velocity field using the Runge-Kutta method of specified order with adaptive time steps.

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration).
  • Input Field [Field] Accepts a field with vector data.
  • Input Starting Locations [Field] Accepts a data field.
  • Input Glyph [Field] Accepts a field representing the glyphs

Module Output Ports

  • Output Field [Field] Outputs the glyphs
  • Output Streamlines [Field] Outputs the streamlines field
  • Output Glyph [Renderable]: Outputs the glyphs to the viewer.
  • Output Streamlines Object [Renderable]: Outputs the streamlines to the viewer.

read symbols

The read symbols module creates symbolic representations of different borehole identifiers based on a set of user defined parameters. The symbols are displayed at the top of each borehole based on its x,y & z coordinates. A sample file with 48 predefined symbols is included, but it can be customized to produce special symbols.

Each symbol is made up of three components. The first shape is a fixed polygon with an outline. The thickness of the outline is selectable (via the control panel). A second polygon, which overlaps the first and has the same number of sides, has selectable minimum and maximum radial values (via the .SYM file). The third component is made up of a user defined set of lines (0 gives no lines). Each polygon has the same number of faces as defined in the #face parameter in the .SYM file. The area created by the difference between the Rmin value and the Rmax value is solid.
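The ring geometry described above (a solid band between Rmin and Rmax on an n-sided polygon) can be sketched as follows. This is a hypothetical illustration of the geometry, not the module's code; the function names are invented for the example.

```python
# Hypothetical sketch of the symbol ring geometry (not EVS code).
from math import cos, sin, radians, pi

def polygon(n_faces, radius, rot_deg=0.0):
    """Vertices of a regular n-sided polygon of the given radius."""
    return [(radius * cos(radians(rot_deg) + 2 * pi * i / n_faces),
             radius * sin(radians(rot_deg) + 2 * pi * i / n_faces))
            for i in range(n_faces)]

def ring(n_faces, rmin, rmax, rot_deg=0.0):
    """Outer and inner boundaries of a symbol; the band between rmin and
    rmax is the solid area. rmin == 0 gives a filled polygon (no hole)."""
    outer = polygon(n_faces, rmax, rot_deg)
    inner = polygon(n_faces, rmin, rot_deg) if rmin > 0 else []
    return outer, inner

outer, inner = ring(12, 0.8, 1.0)   # like symbol 3: a thin 12-sided "circle ring"
```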

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration) from other modules.
  • Input Geologic Field [Field] Accepts a data field from gridding and horizons to krige data into geologic layers.
  • Filename [String / minor] Allows the sharing of file names between similar modules.

Module Output Ports

  • Filename [String / minor] Allows the sharing of file names between similar modules.
  • Sample Symbols [Renderable]: Outputs to the viewer

EVS.SYM file:

The following is a listing of the file evs.sym in evs\data\special. This file can be customized to produce other symbols.

# rmin rmax lmin lmax #face #line bw rot lrot rvrs name
48
1 0. 1 1 1 12 0 1 0 0 0 solid fill circle
2 0. .7 .7 1.2 12 4 1 0 0 0 solid fill circle w/ line
3 .8 1 1 1 12 0 1 0 0 0 circle ring
4 .4 1 1 1 12 0 1 0 0 0 fat circle ring
5 .0 .4 1 1 12 4 1 0 0 0 circle ring w/lines
6 .8 .7 .7 1.2 12 4 1 0 0 0 circle ring w/lines
7 .4 1 1 1 4 0 1 0 0 0 fat square box
8 .8 1 1 1 4 0 1 45 0 0 thin square box
9 .0 1 1 1 4 0 1 45 0 0 solid square box
10 .0 .7 .7 1.2 12 4 2 30 -30 0 half moon bk top w/line
11 .0 .7 .7 1.2 12 4 2 300 -300 0 half moon bk rt w/line
12 .0 .7 .7 1.2 12 4 2 210 -210 0 half moon bk bot w/line
13 .0 .7 .7 1.2 12 4 4 30 -30 0 qrtr moon bk ul w/line
14 .0 .7 .7 1.2 12 4 4 120 -120 0 qrtr moon bk ur w/line
15 .8 .7 0 1.2 12 4 1 0 0 0 open bulls-eye
16 .0 .7 .7 1.2 12 4 2 120 -120 0 half moon bk lft w/line
17 .0 1 1 1. 3 0 1 30 0 0 solid black triangle
18 .8 .7 .7 1.2 3 3 1 90 0 0 hollow blk triangle w/line
19 .0 1 1 1. 3 0 1 90 0 0 solid black triangle
20 .8 .7 .7 1.2 4 4 1 0 0 0 diamond w/line
21 .8 1 1 1. 4 0 1 0 0 0 diamond
22 .0 .7 .7 1.2 4 4 1 0 0 0 solid diamond w/line
23 .0 .7 .7 1.2 6 6 4 0 0 0 hex moon bk ul w/line
24 .0 .7 .7 1.2 6 6 4 180 0 0 hex moon bk ul w/line
25 0. 1 1 1 12 0 1 0 0 1 solid fill circle
26 0. .7 .7 1.2 12 4 1 0 0 1 solid fill circle w/ line
27 .8 1 1 1 12 0 1 0 0 1 circle ring
28 .4 1 1 1 12 0 1 0 0 1 fat circle ring
29 .0 .4 1 1 12 4 1 0 0 1 circle ring w/lines
30 .8 .7 .7 1.2 12 4 1 0 0 1 circle ring w/lines
31 .4 1 1 1 4 0 1 0 0 1 fat square box
32 .8 1 1 1 4 0 1 45 0 1 thin square box
33 .0 1 1 1 4 0 1 45 0 1 solid square box
34 .0 .7 .7 1.2 12 4 2 30 -30 1 half moon bk top w/line
35 .0 .7 .7 1.2 12 4 2 300 -300 1 half moon bk rt w/line
36 .0 .7 .7 1.2 12 4 2 210 -210 1 half moon bk bot w/line
37 .0 .7 .7 1.2 12 4 4 30 -30 1 qrtr moon bk ul w/line
38 .0 .7 .7 1.2 12 4 4 120 -120 1 qrtr moon bk ur w/line
39 .8 .7 0 1.2 12 4 1 0 0 1 open bulls-eye
40 .0 .7 .7 1.2 12 4 2 120 -120 1 half moon bk lft w/line
41 .0 1 1 1. 3 0 1 30 0 1 solid black triangle
42 .8 .7 .7 1.2 3 3 1 90 0 1 hollow blk triangle w/line
43 .0 1 1 1. 3 0 1 90 0 1 solid black triangle
44 .8 .7 .7 1.2 4 4 1 0 0 1 diamond w/line
45 .8 1 1 1. 4 0 1 0 0 1 diamond
46 .0 .7 .7 1.2 4 4 1 0 0 1 solid diamond w/line
47 .0 .7 .7 1.2 6 6 4 0 0 1 hex moon bk ul w/line
48 .0 .7 .7 1.2 6 6 4 180 0 1 hex moon bk ul w/line

sym #

Used to number (label) each symbol's algorithm. This is the same number used in the last column of the APDV data file.

Rmin, Rmax, Lmin, and Lmax

These values determine the size of the three possible shapes used to create each symbol. The center point is at 0.0 and the outer edge of the polygons is at 1.0. The x/y lines can start at the center (0.0) or at any other position within the polygon. They can also be extended beyond 1.0 to a position of 1.7.

Rmin

Sets the minimum radius of the inside of the second polygon. With a setting of 0.0 the inside is fully minimized thus creating a solid polygon from the center out to Rmax. A setting of 0.8 will create a solid polygon, with an empty center, out to Rmax.

Rmax

Sets the maximum radius of the outside of the second polygon. A setting of 1.0, places the outside edge directly over the outside edge of the first, fixed polygon. A setting of 0.2 and a Rmin setting of 0.0 creates a small solid polygon centered in the middle of the first polygon.

Lmin

Sets the starting point for the x/y lines. 0.0 starts the lines from the center of the polygons. 1.0 starts the lines at the outer edge of the polygons.

Lmax

Determines how far the lines will extend from Lmin. If Lmax and Lmin equal 1.0 then no lines will be displayed. If Lmin is 0.0 and Lmax is 1.7 the lines will extend from the center past the outer edge of the polygons.

#face

This value determines the number of faces both polygons will display. A value of 12 displays a convincing circle.

#line

This value determines the number of lines.

bw

This parameter allows you to divide the second polygon into alternating light/dark solids about the x/y axes.

Valid values are 1, 2 and 4.

1 = full solid

2 = half solid

4 = alternating quarter solids

rot

Sets the rotation of the symbol in degrees.

lrot

Sets the rotation of the lines relative to the symbol in degrees.

rvrs

Use this parameter to reverse the symbols colors. A value of 0 is normally used but a value of 1 will reverse the colors.

name

An optional description of each symbol. This is only used for reference within the SYM file.
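
Given the column layout above, a .SYM record can be parsed with a few lines of code. This sketch is ours, not EVS's reader; it simply splits the eleven numeric fields and keeps the remainder as the name:

```python
def parse_sym_line(line):
    # The first 11 whitespace-separated fields are numeric; the trailing
    # "name" field may itself contain spaces.
    parts = line.split()
    keys = ["sym", "rmin", "rmax", "lmin", "lmax",
            "face", "line", "bw", "rot", "lrot", "rvrs"]
    rec = {k: float(v) for k, v in zip(keys, parts[:11])}
    rec["name"] = " ".join(parts[11:])
    return rec

rec = parse_sym_line("3 .8 1 1 1 12 0 1 0 0 0 circle ring")
```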

Sample Module Networks

The sample network shown below reads a GEO formatted data file and a SYM formatted algorithm file. The output is displayed by the geometry viewer.

Symbols
|
|
EVS viewer

A test geology file is included in the evs\special directory called TEST_SYM.GEO. It displays all 48 of the default symbols defined in the file shown above. The symbols are oriented starting at the lower left hand corner and going left to right and bottom to top.

create_spheroid

This module is deprecated and replaced by place_glyph

The create_spheroid module produces a 2D circular disc or a 3D spheroidal or ellipsoidal grid that can be used for any purpose; however, the primary application is as starting points for 3d streamlines or advector.

Module Input Ports

  • Input Field [Field] Accepts a field to extract its extent

Module Output Ports

  • Output Field [Field / Minor] Outputs the surface
  • Surface [Renderable]: Outputs to the viewer

advect_surface

The advect_surface module combines surface streamlines capability and a tool for sequential positioning of glyphs along the streamline trajectories to simulate advection of particles down a surface. The result is an animation of particle motion, with the particles represented as any EVS geometry (such as a jet or a sphere). The glyphs can scale, deflect or deform according to the velocity vector. The direction of travel of streamlines can be specified to be downhill or uphill (for the slope case). The input glyphs travel along streamlines (not necessarily visible in the viewer) which are produced by integrating a velocity field using the Runge-Kutta method of specified order with adaptive time steps.

The advect_surface module is used to produce streamlines and particle animations on any surface based on its slopes. The direction of travel of streamlines can be specified to be downhill or uphill for the slope case. A physics simulation option is also available which employs a full physics simulation including friction and gravity terms to compute streamlines on the surface.

Module Input Ports

  • Z Scale [Number] Accepts Z Scale (vertical exaggeration).
  • Input Field [Field] Accepts a field with vector data.
  • Input Starting Locations [Field] Accepts a data field.
  • Input Glyph [Field] Accepts a field representing the glyphs

Module Output Ports

  • Output Field [Field] Outputs the glyphs
  • Output Streamlines [Field] Outputs the streamlines field
  • Output Glyph [Renderable]: Outputs the glyphs to the viewer.
  • Output Streamlines Object [Renderable]: Outputs the streamlines to the viewer.

fence_geology

The fence_geology module uses data in specially formatted .geo files to model the surfaces of geologic layers in vertical planes, or cross sections. Fence Geology essentially creates layers of quadrilateral (4 node) elements (in a vertical plane) in which each node (and element) is assigned to an individual geologic layer. The output of fence_geology is a data field, consisting of a 2D line with each layer's elevation as nodal data elements, that can be sent to the krig_fence and horizons to 3d modules where the quadrilateral elements are connected to the element nodes in adjacent geologic surfaces to create layers along the fence.

Module Input Ports

  • Input Filename [String] Receives the filename from other modules.
  • Input Line [Field] Allows the user to import a line (path) to which all data will be kriged.

Module Output Ports

  • Geologic legend Information [Geology legend] Supplies the geologic material information for the legend module.
  • Output Line [Field] Connects to krig_fence
  • Filename [String / minor] Outputs a string containing the file name and path. This can be connected to other modules to share files.

file_output

The file_output module creates a formatted string based upon the values passed to it. This string is then written to the selected ASCII text file. Certain modules such as 3d estimation, krig_2d, and krig_fence output a formatted string for just this purpose.

adaptive_indicator_krig

adaptive_indicator_krig is an alternative geologic modeling concept that uses geostatistics to assign lithologic materials, as defined in a pregeology (.pgf) file, to cells in a 3D volumetric grid.

There are two methods of lithology assignment:

  • Nearest Neighbor is a quick method that merely finds the nearest lithology sample interval among all of your data and assigns that material. It is very fast, but generally should not be used for your final work.
  • Kriging provides the rigorous probabilistic approach to geologic indicator kriging. The probability for each material is computed for each cell center of your grid. The material with the highest probability is assigned to the cell. All of the individual material probabilities are provided as additional cell data components. This will allow you to identify regions where the material assignment is somewhat ambiguous. Needless to say, this approach is much slower (especially with many materials), but often yields superior results and interesting insights.
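
The two assignment strategies can be sketched in a few lines; this is not EVS code (the function names, isotropic distance and sample data are illustrative), but it shows the logic of each method:

```python
import numpy as np

def assign_nearest(cell_centers, sample_points, sample_materials):
    # Nearest Neighbor: each cell takes the material of the closest
    # lithology sample (isotropic distance here, for simplicity).
    out = []
    for c in cell_centers:
        d = np.linalg.norm(sample_points - c, axis=1)
        out.append(sample_materials[int(np.argmin(d))])
    return out

def assign_by_probability(probabilities):
    # Kriging path: given per-material probabilities estimated for one
    # cell, assign the material with the highest probability.
    return max(probabilities, key=probabilities.get)

mat = assign_by_probability({"sand": 0.2, "clay": 0.7, "gravel": 0.1})
```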

adaptive_indicator_krig extends the technology in lithologic modeling in several ways:

  1. Material assignments are done on a nodal versus cell basis providing additional inherent resolution
  2. Gridding is handled by outside modules. This allows for assigning material data based on a PGF file after kriging analyte (e.g. chemistry) or other parameter data with 3d estimation.
  3. Though it does not provide material boundaries that are as smooth as gridding and horizons, it does provide much smoother interfaces than lithologic modeling’s Lego-like material structures.

There are two fundamental differences between lithologic modeling and adaptive_indicator_krig:

  1. Geology / Grid input:
    1. lithologic modeling expects input from modules like gridding and horizons (which is a set of surfaces) and it builds your grid for you just as 3d estimation does.
    2. adaptive_indicator_krig is more like the “Kriging to an external grid” option in 3d estimation. You need to create the 3D grid (which doesn’t need to have any data) that it will use. It will take that grid as a starting point for material assignments and later smoothing.
  2. Lithologic Material Assignment
    1. lithologic modeling assigns whole cells to cell sets and sets CELL data which is Material_ID.
    2. adaptive_indicator_krig takes the external grid and further refines it by splitting whole cells along all boundaries between two or more materials to create smoother interfaces.

Module Input Ports

  • Input Field [Field] Accepts data from 3d estimation, horizons to 3d or other modules that have already created a grid containing volumetric cells. If the input field has data such as concentrations, it will be included in the output.
  • Filename [String / minor] Allows the sharing of file names between similar modules.
  • Refine Distance [Number] Accepts the distance used to discretize the lithologic intervals into points used in kriging.

Module Output Ports

  • Geologic legend Information [Geology legend] Supplies the geologic material information for the legend module.
  • Output Field [Field] Contains nodal data and a refined grid representing geologic materials.
  • Filename [String / minor] Outputs a string containing the file name and path. This can be connected to other modules to share files.
  • Refine Distance [Number] Outputs the distance used to discretize the lithologic intervals into points used in kriging or displayed in post_samples as spheres.

Properties and Parameters

The Properties window is arranged in the following groups of parameters:

  • Grid Settings: control the grid type, position and resolution
  • Krig Settings: control the estimation methods
    • NOTE: Nearest Neighbor assigns the lithologic material cell data based on the nearest lithologic material (in anisotropic space) to your PGF borings. This is done based on the cell center (coordinates) and an enhanced refinement scheme for the PGF borings. In general Nearest Neighbor should not be used for final results

Advanced Variography Options:

It is far beyond the scope of this Help to attempt an advanced geostatistics course. The terminology and variogram plotting style that we use are industry standard. We will not provide detailed technical support nor complete documentation on these features, which would effectively require a geostatistics textbook.

However, we have offered an online course on how to take advantage of the complex, directional anisotropic variography capabilities in adaptive_indicator_krig (which apply equally well to lithologic modeling and 3d estimation), and that course is available as a recorded video class. This class is focused on the mechanics of how to employ and refine the variogram anisotropy with respect to your data and the physics of your project, such as contaminated sediments in a river bottom. The variogram is displayed as an ellipsoid which can be distorted to represent the Primary and Secondary anisotropies and rotated to represent the Heading, Dip and Roll. Overall scale and translation are also provided as additional visual aids to compare the variogram to the data, though these do not affect the actual variogram.

We are not hiding this capability from you as the Anisotropic Variography Study folder of Earth Volumetric Studio Projects contains a number of sample applications which demonstrate exactly what is described above. However, we assure you that understanding how to apply this to your own projects will be quite daunting and really does require a number of prerequisites:

  • A thorough explanation of these complex applications
  • A reasonable background in Python and how to use Python in Studio
  • An understanding of all of the variogram parameters and their impact on the estimation process on both theoretical datasets as well as real-world datasets.

This 3-hour course addresses these issues in detail.

krig_fence

krig_fence models parameter distributions within domains defined by the boundaries of the input data in 3D Fence sections which can “snake” around in the x-y plane and are parallel to the z-axis. krig_fence can also receive the geologic system modeled by Fence Geology. It creates a quadrilateral finite-element grid with kriged nodal values of any scalar property and its kriged confidence level, and outputs a geometry whose elements can be rendered to view the color scaled parameter distribution on the element surfaces. krig_fence provides several convenient options for pre- and post-processing the input parameter values, and allows the user to consider anisotropy in the medium containing the property.

Module Input Ports

  • Filename [String / minor] Allows the sharing of file names between similar modules.
  • Fence Geology Input [Field] Accepts a field from fence_geology containing geologic layers.
  • Input External Data [Field / minor] Allows the user to import a field containing data. This data will be kriged to the grid instead of using file data.

Module Output Ports

  • Filename [String / minor] Allows the sharing of file names between similar modules.
  • Output Field [Field] Outputs a 3D data field which can be input to any of the Subsetting and Processing modules.
  • Status Information [String / minor] Outputs a string containing module parameters. This is useful for connection to write evs field to document the settings used to create a grid.

fence_geology_map

The fence_geology_map module creates a 3-dimensional fence diagram from the 1-dimensional line contours produced by fence_geology, which follow your geology, to allow visualization of the geologic layering of a system. It accomplishes this by creating a user-specified distribution of nodes in the Z dimension between the top and bottom lines defining each geologic layer.

The number of nodes specified for the Z Resolution may be distributed over the geologic layers approximately in proportion to the fractional thickness of each layer relative to the total thickness of the geologic domain. In this case, at least three layers of nodes (2 layers of elements) will be placed in each geologic layer.
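
A rough sketch of this proportional distribution follows; the rounding scheme here is illustrative (the module's exact allocation, and how it reconciles the total, are not documented):

```python
def layer_node_counts(thicknesses, z_resolution, min_nodes=3):
    # Allocate node layers roughly in proportion to each geologic layer's
    # fractional thickness, with a floor of min_nodes per layer.
    total = sum(thicknesses)
    return [max(min_nodes, round(z_resolution * t / total))
            for t in thicknesses]

# Three layers of 10, 40 and 50 units with a Z Resolution of 20:
counts = layer_node_counts([10.0, 40.0, 50.0], z_resolution=20)
```

The thin 10-unit layer would receive only 2 node layers proportionally, so the 3-node floor applies.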

Module Input Ports

  • Input Geologic Field [Field] Accepts fence_geology output

Module Output Ports

  • Output Field [Field] Outputs the field

application_notes

The application_notes module has been deprecated and replaced by the Annotation library's “Notes” module.

texture_colors

This is a deprecated module

texture_colors functionality has been incorporated into all modules. On the Home tab, you have the Render Method selector where you can choose to use Vertex RGB coloring or Textures.

texture_wave

The texture_wave module utilizes transparency and texture mapping similar to texture_colors and illuminated_lines technology to create an animated effect. However, unlike illuminated_lines, this module works with both OpenGL and Software Rendering.

texture_wave has a single input port that accepts the grid with nodal data that you want to color with this technique. This would normally be tubes or streamribbons.

The Phase is the parameter that changes during the animation loop.

Number of Steps: determines the number of steps in the animation.

Texture Resolution is the internal resolution of the image used for texture-coloring.

Min Amplitude is the minimum opacity of the objects.

Max Amplitude is the maximum opacity of the objects.

Contrast affects the contrast (similar to color saturation).
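
The parameters above suggest a simple model of the animated effect. As a hedged sketch (the actual texture math in texture_wave is not documented), a sine wave in the line parameter, shifted by the phase and rescaled between the two amplitudes, behaves like this:

```python
import math

def wave_opacity(u, phase, min_amp, max_amp):
    # u is the position along the line in [0, 1]; phase advances each
    # animation step, sliding the wave along the geometry.
    w = 0.5 + 0.5 * math.sin(2.0 * math.pi * (u - phase))
    return min_amp + (max_amp - min_amp) * w

# Eight animation steps sampled at one point on the line:
frames = [wave_opacity(0.25, p / 8.0, 0.0, 1.0) for p in range(8)]
```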

In the image below, we used streamlines which are passed to tubes, which are then connected to texture_wave. The transparency, colors, and animation effects on the tubes are all performed by texture_wave.

The viewer window is shown below.

illuminated_lines

Displays illuminated lines using a texture-mapped illumination model on polylines, with line halo and animation effects.

Prerequisites

This module requires OpenGL rendering to be selected. This module utilizes special OpenGL calls to implement the illuminated line technique. If this module is used with another renderer, such as the software renderer or the output_images module (not set to Automatic), lines will be drawn in the default mode with illuminated line features disabled.

This module requires the input mesh to contain one Polyline cell set. Any other type of cell set will be rejected, and any additional cell sets will be ignored. Any scalar node data may be present, or none for purely geometric display.

Animation Effects

Ramped/Stepped This choice selects the style of effect variation. Ramped creates a linearly increasing or decreasing value, while Stepped makes a binary chop effect. In Ramped mode, the blending can be selected to start small then get big, the reverse, or both; the values are down, up and up&down respectively. Stepped causes abrupt changes in effect.

AnimatedLength This slider sets the length of the effect along the polyline.

AnimationSpacing This slider sets the spacing between effects along the line.

ModulateOpacity In this mode the line segment varies in transparency from completely transparent to opaque.

ModulateWidth In this mode the line width is varied between 1 (very thin) to fat, based on the effect modes and shape controls.

Reverse Effect As the animation effect is applied between two zones, such as the dash and the space between the dash, this toggle reverses the area where the effect is applied.

Halo Parameters

Halo Width The width control for the halo effect defines the size of the transparent mask region added to the edge of each line. A value of zero turns off the halo effect.

Illuminated Lines Shading Model

AmbientLighting This value provides a base shadow value, a constant added to all shading values.

DiffuseLighting Pure diffuse reflection term; the amount of shading depends on the light angle.

SpecularHighlights Amount of specular reflection highlights based on light and viewer angle.

Specular Focus Tightness of specular reflection, low values are dull, wide reflections, high values are small spot reflections.

Line Width Controls line width. Normal 1-pixel lines are 1; width can be increased in whole increments. Wide lines are drawn in 2D screen space, not as full 3D ribbons. If you want full ribbons, use the streamline module's ribbon mode.

Line Opacity Variable transparency of all lines. A value of 1.0 is fully opaque, while a value of zero makes lines invisible.

DataColor Blending If node data is present, this controls the relative mix of data color and shading color. A value of zero sets full contribution of data color, while at 1.0 no data color is used and the line shade is dominated by illumination effects.

Smooth Shading This enables an additional interpolation mode for blended node data colors. In the off state, data is sampled once per line segment. When enabled, linear interpolation is used between end points of each segment. This can be helpful if large gradients are present on low resolution polylines.

Antialias This effect, sometimes called “smooth lines”, blends the drawing of lines to create a smooth effect, reducing the “jaggies” at pixel resolution.

Sort Trans This mode assists visual quality when transparency or antialiasing modes are used, helping to reduce artifacts caused by non-depth sorted line crossings.
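
The DataColor Blending control described above is a linear mix between the data color and the illumination shade; a minimal sketch (the function name is ours, and the shade is reduced to a grey level for simplicity):

```python
def blend_color(data_rgb, shade, blending):
    # blending = 0.0 -> pure data color; blending = 1.0 -> pure
    # illumination shade.
    return tuple((1.0 - blending) * c + blending * shade for c in data_rgb)

pure_data = blend_color((1.0, 0.0, 0.0), 0.5, 0.0)
pure_shade = blend_color((1.0, 0.0, 0.0), 0.5, 1.0)
```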