This tutorial shows you how to initialize, shatter, and extract data in SilviMetric using the Command Line Interface. We are going to use the Autzen Stadium as our test example.

SilviMetric depends upon Conda for packaging support, so you must first install all of SilviMetric's dependencies using Conda.


The Autzen Stadium data uses units of feet, which can sometimes be a source of confusion for tile settings and related parameters.


Open a Conda terminal and install the necessary dependencies:

conda env create \
    -f https://raw.githubusercontent.com/hobuinc/silvimetric/main/environment.yml  \
    -n silvimetric


This installs the list of dependencies from the environment.yml file hosted in the SilviMetric GitHub repository.


If you are using Windows, the line continuation character is ^ instead of \.

  1. Activate the environment:

    conda activate silvimetric
  2. Install SilviMetric:

    pip install silvimetric


To initialize a SilviMetric database, we need a bounds and a coordinate reference system.

  1. We first need to determine a bounds for our database. In our case, we are going to use PDAL and jq to grab our bounds:

    pdal info https://s3.amazonaws.com/hobu-lidar/autzen-classified.copc.laz  \
        --readers.copc.resolution=1 | jq -c '.stats.bbox.native.bbox'

    Our boundary is emitted in expanded form.



    You can express bounds in two additional formats for SilviMetric:

    • [635579.2, 848884.83, 639003.73, 853536.21], ordered as [minx, miny, maxx, maxy]

    • ([635579.2, 639003.73], [848884.83, 853536.21]), ordered as ([minx, maxx], [miny, maxy])


    You can install jq by issuing conda install jq -y in your environment if you are on Linux or macOS. On Windows, you will need to download jq from https://jqlang.github.io/jq/download/ and put it in your path.
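The two extra notations above carry the same four numbers as the JSON bounds form. The short Python sketch below (the function names are our own, purely illustrative, not part of SilviMetric) shows that each form reduces to the same (minx, miny, maxx, maxy) tuple:

```python
import json

# The same Autzen bounds in the three accepted notations.
as_json = '{"maxx":639003.73,"maxy":853536.21,"minx":635579.2,"miny":848884.83}'
as_flat = [635579.2, 848884.83, 639003.73, 853536.21]       # [minx, miny, maxx, maxy]
as_pairs = ([635579.2, 639003.73], [848884.83, 853536.21])  # ([minx, maxx], [miny, maxy])

def normalize_json(s):
    d = json.loads(s)
    return (d["minx"], d["miny"], d["maxx"], d["maxy"])

def normalize_flat(b):
    minx, miny, maxx, maxy = b
    return (minx, miny, maxx, maxy)

def normalize_pairs(p):
    (minx, maxx), (miny, maxy) = p
    return (minx, miny, maxx, maxy)

# All three notations describe the same rectangle.
assert normalize_json(as_json) == normalize_flat(as_flat) == normalize_pairs(as_pairs)
```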

  2. We need a coordinate reference system for the database. We will grab it from the PDAL metadata just like we did for the bounds:

    pdal info --metadata https://s3.amazonaws.com/hobu-lidar/autzen-classified.copc.laz  \
        --readers.copc.resolution=10 | \
        jq -c '.metadata.srs.json.components[0].id.code'

    Our EPSG code is in the pdal info --metadata output, and once it has been extracted by jq, we can use it.



Both a bounds and a CRS must be set to initialize a database. We can set them to whatever we want, but any data we insert must match the coordinate system of the SilviMetric database.

  1. With bounds and CRS in hand, we can now initialize the database, passing the EPSG code we extracted above:

    silvimetric autzen-smdb.tdb \
        initialize \
        '{"maxx":639003.73,"maxy":853536.21,"maxz":615.26,"minx":635579.2,"miny":848884.83,"minz":406.46}' \
        "EPSG:2992"

Be careful with your shell’s quote escaping rules!


The scan command will tell us information about the point cloud with respect to the database we already created, including a best guess at the correct number of cells per tile, or tile size.

silvimetric -d autzen-smdb.tdb scan https://s3.amazonaws.com/hobu-lidar/autzen-classified.copc.laz

We should see output like the following, recommending we use a tile size of 185.

silvimetric - INFO - info:156 - Pointcloud information:
silvimetric - INFO - info:156 -   Storage Bounds: [635579.2, 848884.83, 639003.73, 853536.21]
silvimetric - INFO - info:156 -   Pointcloud Bounds: [635577.79, 848882.15, 639003.73, 853537.66]
silvimetric - INFO - info:156 -   Point Count: 10653336
silvimetric - INFO - info:156 - Tiling information:
silvimetric - INFO - info:156 -   Mean tile size: 91.51758793969849
silvimetric - INFO - info:156 -   Std deviation: 94.31396536316173
silvimetric - INFO - info:156 -   Recommended split size: 185
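Note that the recommended split size in this run works out to the mean plus one standard deviation of the per-tile sizes, floored (91.52 + 94.31 ≈ 185.83). The snippet below is only a back-of-the-envelope reconstruction of that relationship, not SilviMetric's actual heuristic:

```python
from math import floor

# Statistics reported by `silvimetric scan` above.
mean_tile_size = 91.51758793969849
std_deviation = 94.31396536316173

# Observed relationship in this run: the recommendation equals mean plus
# one standard deviation, floored. This is our reconstruction, not
# SilviMetric's code.
recommended = floor(mean_tile_size + std_deviation)
print(recommended)  # 185
```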


We can now insert data into the SMDB.

If we run this command without the --tilesize argument, SilviMetric will determine a tile size for us. The method is the same one scan uses, but it also filters out tiles that contain no data.

silvimetric -d autzen-smdb.tdb \
   --threads 4 \
   --workers 4 \
   --watch \
   shatter \
   --date 2008-12-01 \
   https://s3.amazonaws.com/hobu-lidar/autzen-classified.copc.laz

If we grab the tile size from the scan that we ran earlier, we’ll skip the filtering step.

silvimetric -d autzen-smdb.tdb \
   --threads 4 \
   --workers 4 \
   --watch \
   shatter \
   --tilesize 185 \
   --date 2008-12-01 \
   https://s3.amazonaws.com/hobu-lidar/autzen-classified.copc.laz


After data is inserted, we can extract it into different rasters. When we created the database we gave it a list of Attributes and Metrics. When we ran Shatter, we filled in the values for those in each cell. If we have a database with the Attributes Intensity and Z, in combination with the Metrics min and max, each cell will contain values for min_Intensity, max_Intensity, min_Z, and max_Z. This is also the list of available rasters we can extract.

silvimetric -d autzen-smdb.tdb extract -o output-directory
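The raster naming described above is simply the cross product of metrics and attributes. As an illustration of that naming pattern (this is not SilviMetric code):

```python
# Metrics and attributes from the example database described above.
metrics = ["min", "max"]
attributes = ["Intensity", "Z"]

# Each (metric, attribute) pair names one extractable raster.
rasters = [f"{m}_{a}" for a in attributes for m in metrics]
print(rasters)  # ['min_Intensity', 'max_Intensity', 'min_Z', 'max_Z']
```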


We can query past shatter processes and the database schema with the info command.

silvimetric -d autzen-smdb.tdb info --history

This will print out a JSON object containing information about the current state of the database. The name key here is necessary for the delete, restart, and resume commands. For the following commands, we will have copied the value of the name key into the variable uuid.
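If you want to capture that name programmatically, something like the following can work. The JSON excerpt here is hypothetical (both the key layout and the uuid value are made up for illustration); inspect your own info --history output for the real structure:

```python
import json

# Hypothetical excerpt of `info --history` output; the structure and the
# uuid value below are illustrative, not real SilviMetric output.
history_json = '''
{
  "history": [
    {"name": "084a1a82-8b0c-4cd8-b9d5-b4a1d58b77c4", "date": ["2008-12-01"]}
  ]
}
'''

# Parse the JSON and pull out the name of the first shatter process.
records = json.loads(history_json)
uuid = records["history"][0]["name"]
print(uuid)
```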


We can also remove a shatter process with the delete command. This removes all data associated with that shatter process from the database, but leaves an updated copy of its config in the database config should you want to reference it later.

silvimetric -d autzen-smdb.tdb delete --id $uuid


If you would like to rerun a shatter process, whether or not it previously finished, you can use the restart command. This calls the delete method and then uses the retained config to re-run the shatter process.

silvimetric -d autzen-smdb.tdb restart --id $uuid


If a shatter process is cancelled partway through, we can pick up where we left off with the resume command.

silvimetric -d autzen-smdb.tdb resume --id $uuid