Shatter#

silvimetric.commands.shatter.agg_list(data_in)#

Make variable-length point data attributes into lists.
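As an illustration of this kind of aggregation (a pandas sketch, not the real `agg_list` implementation; the column names `xi`, `yi`, `Z`, and `Intensity` are hypothetical), a groupby over cell indices can collapse per-point attributes into one variable-length list per cell:

```python
import pandas as pd

# Hypothetical point records: each row is one point landing in cell (xi, yi).
points = pd.DataFrame({
    "xi": [0, 0, 0, 1],
    "yi": [0, 0, 1, 1],
    "Z": [1.0, 2.0, 3.0, 4.0],
    "Intensity": [10, 20, 30, 40],
})

# Collapse the per-point attributes into one variable-length list per cell.
cells = points.groupby(["xi", "yi"]).agg(list).reset_index()
print(cells)
```

Each remaining column now holds Python lists of differing lengths, which matches the variable-length attribute layout the docstring describes.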

silvimetric.commands.shatter.arrange(points: DataFrame, leaf, attrs: list[str])#

Arrange data to fit key-value TileDB input format.

Parameters:
  • points – Tuple of indices and point data array (xis, yis, data).

  • leaf – silvimetric.resources.extents.Extents being used.

  • attrs – List of attribute names.

Raises:

Exception – Missing attribute error.

Returns:

None if no work is done, or a tuple of indices and rearranged data.

silvimetric.commands.shatter.get_data(extents: Extents, filename: str, storage: Storage)#

Execute pipeline and retrieve point cloud data for this extent.

Parameters:
  • extents – silvimetric.resources.extents.Extents being queried.

  • filename – Path to the input point cloud file.

  • storage – silvimetric.resources.storage.Storage object.

Returns:

Point data array from PDAL.

silvimetric.commands.shatter.get_processes(leaves: Generator[Extents, None, None], config: ShatterConfig, storage: Storage) → Bag#

Create the dask bags that define the order of operations.

silvimetric.commands.shatter.join(list_data: DataFrame, metric_data)#

Join the list data and metric DataFrames together.
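The join step can be sketched with pandas (an illustrative example, not the real `join` implementation; the `X`/`Y` cell indices and the `m_Z_mean` metric column are assumed names):

```python
import pandas as pd

# List data: per-cell lists of raw point attributes (hypothetical columns).
list_data = pd.DataFrame({
    "X": [0, 1], "Y": [0, 0],
    "Z": [[1.0, 2.0], [3.0]],
})

# Metric data: per-cell derived statistics computed elsewhere.
metric_data = pd.DataFrame({
    "X": [0, 1], "Y": [0, 0],
    "m_Z_mean": [1.5, 3.0],
})

# Join on the cell indices so each cell row carries both lists and metrics.
joined = list_data.merge(metric_data, on=["X", "Y"], how="left")
print(joined)
```

A left merge on the cell indices keeps every cell from the list data even when a metric is missing for it.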

silvimetric.commands.shatter.run(leaves: Generator[Extents, None, None], config: ShatterConfig, storage: Storage) → int#

Coordinate running of the shatter process and handle any interruptions.

Parameters:
  • leaves – Generator of silvimetric.resources.extents.Extents leaves to process.

  • config – silvimetric.resources.config.ShatterConfig.

  • storage – silvimetric.resources.storage.Storage object.

Returns:

Number of points processed.
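The coordinate-and-survive-interruption pattern can be sketched in plain Python (a conceptual stand-in only; the real `run` dispatches dask bags over Extents leaves, and `run_leaves` below is a hypothetical helper):

```python
# Conceptual sketch: process leaves, tally points, and on interruption
# return the count completed so far so a resumable config can pick up later.
def run_leaves(leaves):
    processed = 0
    try:
        for leaf in leaves:
            processed += leaf  # stand-in for "shatter this extent"
    except KeyboardInterrupt:
        # Swallow the interrupt and report what finished, so the partial
        # progress can be recorded instead of lost.
        pass
    return processed

print(run_leaves([10, 20, 30]))  # → 60
```

Returning the running count on interruption is what lets a previously started config be resumed rather than restarted.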

silvimetric.commands.shatter.run_graph(data_in, metrics)#

Run DataFrames through the metric processes.
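A minimal pandas sketch of pushing grouped data through a set of metric functions (the `metrics` dict of plain callables is an assumption; real metrics are silvimetric Metric objects):

```python
import pandas as pd

# Hypothetical metric functions keyed by name.
metrics = {"mean": lambda s: s.mean(), "count": lambda s: s.size}

points = pd.DataFrame({
    "xi": [0, 0, 1],
    "Z": [1.0, 3.0, 5.0],
})

# Apply every metric to each cell's Z values, one output column per metric.
grouped = points.groupby("xi")["Z"]
out = pd.DataFrame({f"m_Z_{name}": grouped.apply(fn) for name, fn in metrics.items()})
print(out)
```

One output column per (attribute, metric) pair mirrors the `m_Z_mean`-style naming commonly seen in SilviMetric output.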

silvimetric.commands.shatter.shatter(config: ShatterConfig) → int#

Handle setup and running of shatter process. Will look for a config that has already been run before and needs to be resumed.

Parameters:

config – silvimetric.resources.config.ShatterConfig.

Returns:

Number of points processed.

silvimetric.commands.shatter.write(data_in, storage, timestamp)#

Write cell data to the database.

Parameters:
  • data_in – Data to be written to the database.

  • storage – silvimetric.resources.storage.Storage providing the TileDB write stream.

  • timestamp – Timestamp for this write.

Returns:

Number of points written.