_images/KWIVER_logo.png

KWIVER User’s Guide

Contents:


Introduction

KWIVER is a fully featured toolkit for developing Computer Vision Systems, a capability that goes beyond simply supporting the development of Computer Vision Software.

This distinction is an important one. There are a myriad of software frameworks that facilitate the development of computer vision software, most notably the venerable OpenCV, but also including VXL, scikit-image and a wide range of others. The current Deep Learning revolution has additionally spawned a number of software frameworks for doing deep learning based computer vision, including Caffe, PyTorch, TensorFlow and others.

Each of these frameworks has its own unique set of capabilities, target user community, dependencies and levels of difficulty and complexity. When developing computer vision software, the task frequently boils down to selecting the most appropriate framework to work with and proceeding from there.

As the task at hand becomes more complicated, however, the burden on the supporting frameworks, and the task-specific software developed using those frameworks, becomes heavier. Real world problems might be better solved by, for example, fusing OpenCV based motion detections with Faster-RCNN (Caffe) based appearance detections and then filtering the result against a new state-of-the-art image segmentation neural network that runs in yet another deep learning framework. Couple this with the understanding that computer vision algorithms are traditionally extremely compute intensive, doubly so when one considers the GPU requirements of modern deep learning frameworks, and it is clear that building computer vision systems is a daunting task.

KWIVER is designed and engineered from the ground up to support the development of systems of this nature. It has first class features that are designed to allow the development of fully elaborated systems using a wide variety of computer vision frameworks – both traditional and deep learning based – and a wide variety of stream processing and multi-processing topologies. KWIVER based systems have scaled from small embedded computing platforms such as the NVIDIA TX2 to large cloud based infrastructure and a wide variety of platforms in between.

KWIVER is a collection of C++ libraries with C and Python bindings and uses a permissive BSD license.

Visit the repository for instructions on how to get and build the KWIVER code base.

Vital

Vital is the core of KWIVER and is designed to provide data and algorithm abstractions with minimal library dependencies. Vital only depends on the C++ standard library and the header-only Eigen library. Vital defines the core data types and abstract interfaces for core vision algorithms using these types. Vital also provides various system utility functions like logging, plugin management, and configuration file handling. Vital does not provide implementations of the abstract algorithms. Implementations are found in the KWIVER Arrows and are loaded dynamically by vital at run-time via plugins.

The design of KWIVER allows end-user applications to link only against the Vital libraries and have minimal hard dependencies. One can then dynamically add algorithmic capabilities, with new dependencies, via plugins without needing to recompile Vital or the application code. Only Vital is built by default when building KWIVER without enabling any options in CMake. You will need to enable various Arrows in order for Vital to instantiate those implementations.
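
For example, a minimal application might load all available plugins and then instantiate an algorithm implementation by name. The sketch below is illustrative only; it assumes the OpenCV arrow ("ocv") has been built and is on the plugin search path, and the exact headers and factory calls may differ between KWIVER versions.

#include <vital/algo/image_io.h>
#include <vital/plugin_loader/plugin_manager.h>

int main()
{
  // Discover and load all KWIVER plugins (arrows) available at run time.
  kwiver::vital::plugin_manager::instance().load_all_plugins();

  // Ask the abstract image_io interface for an implementation by name.
  // "ocv" is only available if the OpenCV arrow was built and found.
  kwiver::vital::algo::image_io_sptr io =
    kwiver::vital::algo::image_io::create( "ocv" );

  if ( io )
  {
    // Use the algorithm strictly through the Vital interface.
    auto image = io->load( "example.png" );
  }

  return 0;
}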

The Vital API is all that applications need to control the execution of any KWIVER algorithm arrow. In the following sections we will break down the various algorithms and data types provided in Vital based on their functionality.

Common Structures

Iterators

Iterators provide a container-independent way to access elements in an aggregate structure without exposing or requiring any underlying structure. This Vital implementation intends to provide an easy way to create an input iterator via a value reference generation function, similar to how iterators function in the Python language.

Each iterator class descends from a common base-class due to a wealth of shared functionality. This base class has protected constructors in order to prevent direct use.

It is currently undefined behavior when dereferencing an iterator after source data has been released (e.g. if the next_value_func is iterating over a vector of values and the source vector is released in the middle of iteration).

Generation Function

The value reference generation function’s purpose is to return the reference to the next value of type T in a sequence every time it is called. When the end of the sequence is reached, an exception is raised to signal the end of iteration.

Upon incrementing the iterator, this next-value function is called and the reference it returns is retained in order to yield the value or reference when the * or -> operator is invoked on the iterator.

Generator function caveat: since the next-value generator function returns references, the function should ideally return a unique reference every time it is called. If this is not the case, the post-fix increment operator does not function correctly, since the returned iterator copy and the old, incremented iterator share the same reference and thus yield the same value.

Providing a function

Next value functions can be provided in various ways from existing functions to functions created on the fly. Usually, inline structure definitions or lambda functions are used to provide the next-value functionality.

For example, the following shows how to use a lambda function to satisfy the next_value_func parameter for a vital::iterator of type int:

int a[] = { 0, 1, 2, 3 };
using iterator_t = vital::iterator< int >;
// The lambda captures the local array by reference and tracks its position
// in a static counter.
iterator_t it( [&a]() -> iterator_t::reference {
  static size_t i = 0;
  if( i == 4 ) throw vital::stop_iteration_exception();
  return a[i++];
} );

Similarly an inline structure that overloads operator() can be provided if more state needs to be tracked:

using iterator_t = vital::iterator< int >;
struct next_int_generator
{
  int *a;
  size_t len;
  size_t idx;

  next_int_generator(int *a, size_t len )
   : a( a )
   , len( len )
   , idx( 0 )
  {}

  iterator_t::reference operator()()
  {
    if( idx == len ) throw vital::stop_iteration_exception();
    return a[idx++];
  }
};
int a[] = {0, 1, 2, 3};
iterator_t it( next_int_generator(a, 4) );



Trackers

Coming Soon!

Activities

Coming Soon!

Configuration

In many computer software systems, system configuration is essentially an afterthought. Frequently, a few “.ini” or “.yaml” files are created that carry some key configuration settings and that’s the end of it. In contrast to this, the configuration of a computer vision system is frequently a first order component of the system’s working technology. Computer vision algorithms tend to be highly configurable, exposing many different execution parameters and, perhaps more importantly, are very tunable for different operating conditions, performance profiles, and operational characteristics.

To facilitate this, KWIVER provides a hierarchical configuration system including a flexible configuration language that is used for virtually all aspects of KWIVER’s operation. The following sections will detail how to use KWIVER’s config_block architecture and the configuration language used to create and manipulate config_blocks.

Configuration Usage

Introduction

The vital config_block supports general configuration tasks where a general purpose key/value pair is needed. The configuration block is used to specify, communicate and set configurable items in many different situations. The two major users of the configuration support are algorithms and processes. In addition, there are a few other places where they are used.

Configurations are usually established in an external file, which is read and converted to an internal config_block object. This is the typical way to control the behaviour of the software. Configuration blocks can also be created programmatically, such as when specifying an expected set of configurable items.
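
As a brief illustration of the programmatic route, the following sketch builds a small config_block directly in code (the key names here are examples only):

#include <vital/config/config_block.h>

// Build a config_block in code rather than reading it from a file.
kwiver::vital::config_block_sptr make_example_config()
{
  auto config = kwiver::vital::config_block::empty_config();

  // set_value() optionally accepts a description, as discussed later.
  config->set_value( "mode", "online",
                     "Operating mode.\n\n"
                     "Example entry; selects between online and offline processing." );
  config->set_value( "threshold", 0.5 );

  return config;
}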

When algorithms are used within processes, the algorithm’s configuration is supplied as a nested block of the process configuration, as described in the following sections.

From File to config_block

The config_block_io functions can be used to directly convert a config file into a config_block. This can be used by a main program that manages configs and algorithms directly. The read_config_file() function uses a complex set of rules to locate config files based on the host system and application name.
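
A minimal sketch of this usage is shown below. It assumes the read_config_file() overload that takes only a file path; other overloads locate the file using the application name and the config search paths described later in this document.

#include <vital/config/config_block.h>
#include <vital/config/config_block_io.h>

#include <iostream>

int main()
{
  // Read and parse the config file into an in-memory config_block.
  kwiver::vital::config_block_sptr config =
    kwiver::vital::read_config_file( "my_application.conf" );

  // Values are stored as strings and converted on access; a default can be
  // supplied for entries that are not present in the file.
  double threshold = config->get_value< double >( "threshold", 0.5 );
  std::cout << "threshold = " << threshold << std::endl;

  return 0;
}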

Configuration Features

The configuration system provides several features beyond simple key/value storage, each of which is described in detail in later sections:

  • the relativepath modifier, for referencing files relative to the config file
  • macro providers, which can be used to make portable and reusable config files
  • config sub-blocks and configuration context

Establishing Expected Config

Typically the expected config is formulated by creating a config block with all the expected keys, default values, and entry description. This is done for both algorithms and processes.

Don’t be shy with the entry description. This description serves as the design specification for the entry. The expected format is a short description followed by a longer detailed description separated by two new-lines, as shown below:

config->set_value( "config_name", <default_value>,
                   "Short description.\n\n"
                   "Longer description which contains all information needed "
                   "to correctly specify this parameter including any range "
                   "limitations etc." );

The long description does not need any new-line characters for formatting unless specific formatting is desired. The text is wrapped into a text block by all tools that display it.

This expected configuration serves as documentation for the algorithm or process configuration items when it is displayed by the plugin_explorer and other tools. It is also used to validate the configuration supplied at run time to make sure all expected items are present.

Usage by Algorithms

Algorithms specify their expected set of configurable items in their get_configuration() method using the config_block set_value() method, described above.
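
A typical get_configuration() implementation might look like the following sketch. The write_float_features entry matches the set_configuration() example below; the call to the base-class get_configuration() to pick up common entries is an assumption that may vary between algorithm interfaces.

// Return the expected configuration, with defaults and descriptions.
// <algorithm> stands in for the concrete class name.
vital::config_block_sptr
<algorithm>
::get_configuration() const
{
  // Start from the base-class configuration so shared entries are included.
  vital::config_block_sptr config = vital::algorithm::get_configuration();

  config->set_value( "write_float_features", this->write_float_features,
                     "Write features using floating point values.\n\n"
                     "Example entry; when false, feature data is written in a "
                     "more compact integer representation." );

  return config;
}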

The run time configuration is passed to an algorithm through the set_configuration() method. This method typically extracts the expected configuration values and saves them locally for the algorithm to use. When a configuration is read from the file, there is no guarantee that all expected configuration items are present and attempting to get a value that is not present generates an exception.

The recommended way to avoid this problem is to use the expected configuration, as created by the get_configuration() method to supply any missing entries. The following code snippet shows how this is done:

// Set this algorithm's properties via a config block
void
<algorithm>
::set_configuration(vital::config_block_sptr in_config)
{
  // Starting with our generated vital::config_block to ensure that assumed values are present
  // An alternative is to check for key presence before performing a get_value() call.
  vital::config_block_sptr config = this->get_configuration();

  // Merge in supplied config to cause these values to overwrite the defaults.
  config->merge_config(in_config);

  // Get individual config entry values
  this->write_float_features = config->get_value<bool>("write_float_features",
                                                       this->write_float_features);
}
Instantiating Algorithms

Algorithms can be used directly in application code or they can be wrapped by a sprokit process. In either case the actual implementation of the abstract algorithm interface is specified through a config block.

Let’s first look at the code that will instantiate the configured algorithm and then look at the contents of the configuration file.

The following code snippet instantiates a draw_detected_object_set algorithm:

// this pointer will be used to reference the algorithm after it is created.
vital::algo::draw_detected_object_set_sptr m_algo;

// Get algorithm configuration
auto algo_config = get_config(); // or an equivalent call

// Check config so it will give run-time diagnostic of config problems
if ( ! vital::algo::draw_detected_object_set::check_nested_algo_configuration( "draw_algo", algo_config ) )
{
  LOG_ERROR( logger, "Configuration check failed." );
}

vital::algo::draw_detected_object_set::set_nested_algo_configuration( "draw_algo", algo_config, m_algo );
if ( ! m_algo )
{
  LOG_ERROR( logger, "Unable to create algorithm." );
}

After the configuration is extracted, it is passed to the check_nested_algo_configuration() method to determine if the configuration has the basic type entry and the requested type is available. If the type entry is missing or the specified implementation is not available, a detailed log message is generated with the available implementations.

If the configuration is acceptable, the set_nested_algo_configuration() call will actually instantiate and configure the selected algorithm implementation.

The name that is supplied to these calls, “draw_algo” in this case, is used to access the configuration block for this algorithm.

The following configuration file snippet can be used to configure the above algorithm:

block draw_algo
  type = ocv    # select the ocv instance of this algorithm

  block ocv     # configure the 'ocv' instance
    alpha_blend_prob   = true
    default_line_thickness   = 1.25
    draw_text   = false
  endblock # for ocv
endblock  # for draw_algo

The outer block labeled “draw_algo” specifies the configuration to be used for the above code snippet. The config entry “type” specifies which implementation of the algorithm to instantiate. The following block labeled “ocv” is used to configure the algorithm after it is instantiated. The block labeled “ocv” is used for algorithm type “ocv”. If the algorithm type was “foo”, then the block “foo” would be used to configure the algorithm.

Usage by Processes

The configuration for sprokit processes is presented slightly differently than for algorithms, but underneath, they both use the same structure.

Configuration items for a process are defined using the create_config_trait() macro, as shown below:

//                    name,      type,  default,        description
create_config_trait( threshold, float, "-1", "min threshold for output (float).\n\n"
                     "Detections with confidence values below this threshold are not drawn." );

When the process is constructed all configuration parameters must be declared using the declare_config_using_trait() call, as shown below:

declare_config_using_trait( threshold );

All configuration items declared in this way are available for display using the plugin_explorer tool.

Configuration values are extracted from the process configuration in the _configure() method of the process as shown below:

float local_threshold = config_value_using_trait( threshold );

Processes can instantiate and configure algorithms using the approach described above.
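
A sketch of a process _configure() method that does both is shown below. The process class name and member variables (m_threshold, m_algo) are hypothetical; the nested-algorithm calls are the same ones used in the algorithm example above, and error handling is reduced to logging for brevity.

// Hypothetical process configuration method.
void
my_filter_process::_configure()
{
  // Scalar parameter declared earlier with create_config_trait().
  m_threshold = config_value_using_trait( threshold );

  // Instantiate and configure a nested algorithm from this process' config,
  // exactly as shown in the "Instantiating Algorithms" section above.
  auto algo_config = get_config();

  if ( ! vital::algo::draw_detected_object_set::check_nested_algo_configuration(
           "draw_algo", algo_config ) )
  {
    LOG_ERROR( logger(), "Configuration check failed." );
  }

  vital::algo::draw_detected_object_set::set_nested_algo_configuration(
    "draw_algo", algo_config, m_algo );
}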

Configuration for a process comes from a section of the pipe file. The following section of a pipe file shows configuration for a process which supplies the threshold configuration item:

# ================================
process draw_boxes :: draw_detected_object_boxes
  threshold = 3.14
Verifying a Configuration

When a configuration file (or configuration section of a pipe file) is read in, there is no checking of the configuration key names. There is no way of knowing which configuration items are valid or expected and which ones are not. If a name is misspelled, which sometimes happens, it will be misspelled in the configuration block. This can lead to hours of frustration diagnosing a problem.

A configuration can be checked against a baseline using the config_difference class. This class provides methods to determine the differences between a reference configuration and one created from an input file. The difference between these two configurations is presented in two different ways. It provides a list of keys that are in the baseline config but not in the supplied config; these are the config items that were expected but not supplied. It also provides a list of keys that are in the supplied config but not in the expected config; these items are supplied but not expected.

The following code snippet shows how to report the difference between two config blocks:

//                                    ref-config                received-config
kwiver::vital::config_difference cd( this->get_configuration(), config );
auto key_list = cd.extra_keys();
if ( ! key_list.empty() )
{
  // This may be considered an error in some cases
  LOG_WARN( logger(), "Additional parameters found in config block that are not required or desired: "
            << kwiver::vital::join( key_list, ", " ) );
}

key_list = cd.unspecified_keys();
if ( ! key_list.empty() )
{
  LOG_WARN( logger(), "Parameters that were not supplied in the config, using default values: "
            << kwiver::vital::join( key_list, ", " ) );
}

Not all applications need to check both cases. There may be good reasons for not specifying all expected configuration items when the default values are as expected. In some cases, unexpected items that are supplied by the configuration may be indications of misspelled entries.

Config Management Techniques

The configuration file reader provides several alternatives for managing the complexity of a large configuration. The block / endblock construct can be used to shorten config lines and modularize the configuration. The include directive can be used to share or reuse portions of a config.

Starting with the example config section that selects an algorithm and configures it:

   algorithm_instance_name:type = type_name
   algorithm_instance_name:type_name:algo_param = value
   algorithm_instance_name:type_name:threshold = 234

The block construct can be used to simplify the configuration and make it easier to navigate:

 block algorithm_instance_name
   type = type_name
   block  type_name
     algo_param = value
     threshold = 234
   endblock
 endblock

In cases where the configuration block is extensive or used in multiple applications, that part of the configuration can exist as a stand-alone file and be included where it is needed:

block  algorithm_instance_name
  include type_name.conf
endblock

where type_name.conf contains:

type = type_name
block   type_name
  algo_param = value
  threshold = 234
endblock

Environment variables and config macros can be combined to provide a level of adaptability to config files. Using the environment macro in an include directive can provide run time agility without requiring the file to be edited. The following is an example of selecting a different include file based on mode:

include $ENV{MODE}/config.file.conf

Configuration File Format

Configuration files are used to establish a key/value store that is available within a program. The entries can be grouped in a hierarchy of blocks to aid in constructing complex configurations. This document describes the format and features of a config file.

Syntax
Configuration Entries

Configuration entries are in a < key > = < value > format. The key specifies a name for the entry that is assigned the value. All values are treated as strings. No interpretation is done when reading configuration entries. All leading and trailing spaces are removed from the value string. Spaces embedded in the value portion are retained.

If the value string is enclosed in quotes, the quotes will become part of the value and passed to the program.

The simplest form of a config entry is:

simple = value

Configuration entries can be grouped so that entries for a specific component can be specified as a subblock. For example, configuration items for the foo algorithm can be specified as:

foo:mode = red
foo:sync = false
foo:debug = false

by prepending the block/subblock name before the name with a “:” separator. All config entries for foo can be extracted from the larger config into a subblock that is expected by the algorithm. Blocks can be nested to an arbitrary depth, as shown below:

foo:bar:baz:arf:mode = blue
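
In application code, one way to pull those entries out is the config_block subblock() method, sketched here with the key names from the example above:

// Extract all entries under the "foo" prefix into their own config_block.
// Keys inside the returned block no longer carry the "foo:" prefix.
kwiver::vital::config_block_sptr foo_config = config->subblock( "foo" );

bool sync = foo_config->get_value< bool >( "sync", false );
std::string mode = foo_config->get_value< std::string >( "mode", "red" );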

A configuration entry can be made read-only by appending [RO] to the key string. Once an entry has been declared read-only, it cannot be assigned another value or deleted from the config:

simple[RO] = value
Comments

Comments start with the ‘#’ character and continue to the end of the line. When a comment appears after a configuration value, the text from the ‘#’ character to the end of the line is not considered part of the value.

Block Specification

In some cases the fully qualified configuration key can become long and unwieldy. The block directive can be used to establish a configuration context to be applied to the enclosed configuration entries:

block alg

Starts a block with the alg block name and all entries within the block will have alg: prepended to the entry name:

block alg
   mode = red      # becomes alg:mode = red
endblock

Blocks can be nested to an arbitrary depth with each providing context for the enclosed entries:

block foo
  block bar:fizzle
    mode = yellow     # becomes foo:bar:fizzle:mode = yellow
  endblock
endblock
Including Files

The include directive logically inserts the contents of the specified file into the current file at the point of the include directive. Include files provide an easy way to break up large configurations into smaller reusable pieces.

include filename

If the file name is not an absolute path, it is located by scanning the current config search path. The manner in which the config include path is created is described in a following section. If the file is still not found, the stack of include directories is scanned from the current include file back to the initial config file. Macro substitution, as described below, is performed on the file name string before the searching is done.

Block specifications and include directives can be used together to build reusable and shareable configuration snippets:

block main
  block alg_one
    include alg_foo.config
  endblock

  block alg_two
    include alg_foo.config
  endblock
endblock

In this case the same configuration structure can be used in two places in the overall configuration.

Include files can be nested to an arbitrary depth.

Relativepath Modifier

There are cases where an algorithm needs an external file containing binary data that is tied to a specific configuration. These data files are usually stored with the main configuration files. Specifying a full hard coded file path is not portable between different users and systems.

The solution is to specify the location of these external files relative to the configuration file and use the relativepath modifier to construct a full, absolute path at run time by prepending the configuration file directory path to the value:

relativepath data_file = ../data/online.dat

If the current configuration file is /home/vital/project/config/blue/foo.config, the resulting config entry for data_file will be /home/vital/project/config/blue/../data/online.dat

The relativepath modifier can be applied to any configuration entry, but it only makes sense to use it with relative file specifications.

Config File Include Path

Config file search paths are constructed differently depending on the target platform. The directories are searched in the order specified in the following sections.

Windows Platform
  • . (the current working directory)
  • ${KWIVER_CONFIG_PATH} (if set)
  • $<CSIDL_LOCAL_APPDATA>/<app-name>[/<app-version>]/config
  • $<CSIDL_APPDATA>/<app-name>[/<app-version>]/config
  • $<CSIDL_COMMON_APPDATA>/<app-name>[/<app-version>]/config
  • <install-dir>/share/<app-name>[/<app-version>]/config
  • <install-dir>/share/config
  • <install-dir>/config
OS X (Apple) Platform
  • . (the current working directory)
  • ${KWIVER_CONFIG_PATH} (if set)
  • ${XDG_CONFIG_HOME}/<app-name>[/<app-version>]/config (if $XDG_CONFIG_HOME set)
  • ${HOME}/.config/<app-name>[/<app-version>]/config (if $HOME set)
  • /etc/xdg/<app-name>[/<app-version>]/config
  • /etc/<app-name>[/<app-version>]/config
  • ${HOME}/Library/Application Support/<app-name>[/<app-version>]/config (if $HOME set)
  • /Library/Application Support/<app-name>[/<app-version>]/config
  • /usr/local/share/<app-name>[/<app-version>]/config
  • /usr/share/<app-name>[/<app-version>]/config

If <install-dir> is not /usr or /usr/local:

  • <install-dir>/share/<app-name>[/<app-version>]/config
  • <install-dir>/share/config
  • <install-dir>/config
  • <install-dir>/Resources/config
Other Posix Platforms (e.g. Linux)
  • . (the current working directory)
  • ${KWIVER_CONFIG_PATH} (if set)
  • ${XDG_CONFIG_HOME}/<app-name>[/<app-version>]/config (if $XDG_CONFIG_HOME set)
  • ${HOME}/.config/<app-name>[/<app-version>]/config (if $HOME set)
  • /etc/xdg/<app-name>[/<app-version>]/config
  • /etc/<app-name>[/<app-version>]/config
  • /usr/local/share/<app-name>[/<app-version>]/config
  • /usr/share/<app-name>[/<app-version>]/config

If <install-dir> is not /usr or /usr/local:

  • <install-dir>/share/<app-name>[/<app-version>]/config
  • <install-dir>/share/config
  • <install-dir>/config

The environment variable KWIVER_CONFIG_PATH can be set with a list of one or more directories, in the same manner as the native execution PATH variable, to be searched for config files.

Macro Substitution

The values for configuration elements can be composed from static text in the config file and dynamic text supplied by macro providers. The format of a macro specification is $TYPE{name} where TYPE is the name of macro provider and name requests a particular value to be supplied. The name entry is specific to each provider.

Only the text of the macro specification is replaced; any leading or trailing blanks will remain. If the value of a macro is not defined, the macro specification is replaced with the null string.
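
For example, if the hypothetical environment variable RUN_SUFFIX is not set, the entry below produces the value "run-"; the surrounding text is left untouched:

label = run-$ENV{RUN_SUFFIX}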

Macro Providers

The macro providers are listed below and discussed in the following sections.

  • LOCAL - locally defined values
  • ENV - program environment
  • CONFIG - values from current config block
  • SYSENV - system environment
LOCAL Macro Provider

This macro provider supplies values that have been stored previously in the config file. Local values are specified in the config file using the “:=” operator. For example, the config entry mode := online makes $LOCAL{mode} available in subsequent configuration entries:

mode := online
...
config_file = data/$LOCAL{mode}/model.dat

This type of macro definition can appear anywhere in a config file and becomes available for use on the next line. The current block context has no effect on the name of the macro.

ENV Macro Provider

This macro provider gives access to the current program environment. The values of environment variables such as “HOME” can be used by specifying $ENV{HOME} in the config file.
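
For example, a hypothetical entry could locate a data directory under the user's home directory:

data_dir = $ENV{HOME}/kwiver_data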

CONFIG Macro Provider

This macro provider gives access to previously defined configuration entries. For example:

foo:bar = baz

makes the value available to following lines in the config file by specifying $CONFIG{foo:bar}, as shown below:

value = mode-$CONFIG{foo:bar}ify
SYSENV Macro Provider

This macro provider supports the following symbols derived from the current host operating system environment.

  • cwd - current working directory
  • numproc - number of processors in the current system
  • totalvirtualmemory - number of KB of total virtual memory
  • availablevirtualmemory - number of KB of available virtual memory
  • totalphysicalmemory - number of KB of total physical memory
  • availablephysicalmemory - number of KB of available physical memory
  • hostname - name of the host computer
  • domainname - name of the computer in the domain
  • osname - name of the host operating system
  • osdescription - description of the host operating system
  • osplatform - platform name (e.g. x86-64)
  • osversion - version number for the host operating system
  • iswindows - TRUE if running on Windows system
  • islinux - TRUE if running on Linux system
  • isapple - TRUE if running on Apple system
  • is64bits - TRUE if running on a 64 bit machine
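
For example, a hypothetical configuration entry could scale a thread count to the host machine:

worker_threads = $SYSENV{numproc}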

Arrows

Arrows is the collection of plugins that provides implementations of the algorithms declared in Vital. Each arrow can be enabled or disabled in the build process through CMake options. Most arrows bring in additional third-party dependencies and wrap the capabilities of those libraries to make them accessible through the Vital APIs. The code in Arrows also converts or wraps data types from these external libraries into Vital data types. This allows interchange of data between algorithms from different arrows using Vital types as the intermediary.

Capabilities are currently organized into Arrows based on which third-party library they require. However, this arrangement is not required and may change as the number of algorithms and arrows grows. Some arrows, like core, require no additional dependencies. The provided Arrows are:

Core

This arrow requires no additional dependencies and contains the following functionality:

  • Class Probability Filter Algorithm
  • Close Loops Bad Frames Only Algorithm
  • Close Loops Exhaustive Algorithm
  • Close Loops Keyframe Algorithm
  • Close Loops Multi Method Algorithm
  • Compute Ref Homography Core Algorithm
  • Convert Image Bypass Algorithm
  • Detected Object Set Input csv Algorithm
  • Detected Object Set Input kw18 Algorithm
  • Detected Object Set Output csv Algorithm
  • Detected Object Set Output kw18 Algorithm
  • Dynamic Config None Algorithm
  • Estimate Canonical Transform Algorithm
  • Example Detector Algorithm
  • Feature Descriptor I/O Algorithm
  • Filter Features Magnitude Algorithm
  • Filter Features Scale Algorithm
  • Filter Tracks Algorithm
  • Formulate Query Core Algorithm
  • Hierarchical Bundle Adjust Algorithm
  • Initialize Cameras Landmarks Algorithm
  • Match Features Fundamental Matrix Algorithm
  • Match Features Homography Algorithm
  • Track Descriptor Set Output csv Algorithm
  • Track Features Core Algorithm
  • Frame Index Track Set Class
  • Triangulate Landmarks Algorithm
  • Video Input Filter Algorithm
  • Video Input Image_list Algorithm
  • Video Input Pos Algorithm
  • Video Input Split Algorithm

Burnout

This arrow contains the following functionality:

  • Burnout Track Descriptors Algorithm

Ceres

This arrow contains the following functionality:

  • Bundle Adjust Algorithm
  • Optimize Cameras Algorithm
  • Camera Position Smoothness Class
  • Camera Limit Forward Motion Class
  • Distortion Poly Radial Class
  • Distortion Poly Radial Tangential Class
  • Distortion Ratpoly Radial Tangential Class
  • Create Cost Func Factory

Darknet

This arrow contains the following functionality:

  • Darknet Detector Algorithm
  • Darknet Trainer Algorithm

FAQ

I am running out of memory in CUDA…
Try one or both of these suggestions:

  • Change the darknet/models/virat.cfg variables height and width to smaller powers of 32
  • Change the darknet/models/virat.cfg variables batch and subdivisions (make sure they are still the same)

Matlab

Coming Soon!

OpenCV

This arrow is a collection of vital algorithms implemented with the OpenCV API.

This arrow can be built by enabling the KWIVER_ENABLE_OPENCV CMake flag.

This arrow contains the following functionality:

  • Analyze Tracks Algorithm
  • Detect Features Algorithm
  • Detect Features AGAST Algorithm
  • Detect Features FAST Algorithm
  • Detect Features GFTT Algorithm
  • Detect Features MSD Algorithm
  • Detect Features MSER Algorithm
  • Detect Features Simple BLOB Algorithm
  • Detect Features STAR Algorithm
  • Draw Detected Object Set Algorithm
  • Draw Tracks Algorithm
  • Estimate Fundamental Matrix Algorithm
  • Estimate Homography Algorithm
  • Extract Descriptors Algorithm
  • Extract Descriptors BRIEF Algorithm
  • Extract Descriptors DAISY Algorithm
  • Extract Descriptors FREAK Algorithm
  • Extract Descriptors LATCH Algorithm
  • Extract Descriptors LUCID Algorithm
  • Extract Descriptors BRISK Algorithm
  • Detect Features BRISK Algorithm
  • Extract Descriptors ORB Algorithm
  • Detect Features ORB Algorithm
  • Extract Descriptors SIFT Algorithm
  • Detect Features SIFT Algorithm
  • Extract Descriptors SURF Algorithm
  • Detect Features SURF Algorithm
  • Hough Circle Detector Algorithm
  • Image Container Algorithm
  • Image I/O Algorithm
  • Match Features Algorithm
  • Match Features Bruteforce Algorithm
  • Match Features Flannbased Algorithm
  • Refine Detections Write To Disk Algorithm
  • Split Image Algorithm

Proj4

This arrow contains the following functionality:

  • Geo Conversion Class

UUID

This arrow contains the following functionality:

  • Analyze Tracks Algorithm

VisCL

This arrow contains the following functionality:

  • Convert Image Algorithm
  • Descriptor Set Class
  • Detect Features Algorithm
  • Extract Descriptors Algorithm
  • Feature Set Class
  • Image Container Class
  • Match Features Algorithm
  • Match Set Class
  • Min Image Dimensions

VXL

This arrow is a collection of vital algorithms implemented with the VXL API.

This arrow can be built by enabling the KWIVER_ENABLE_VXL CMake flag.

This arrow contains the following functionality:

  • Bundle Adjust Algorithm
  • Camera Map Class
  • Close Loops Homography Guided Algorithm
  • Compute Homography Overlap
  • Estimate Canonical Transform Algorithm
  • Estimate Essential Matrix Algorithm
  • Estimate Fundamental Matrix Algorithm
  • Estimate Homography Algorithm
  • Estimate Similarity Transform Algorithm
  • Image Container Class
  • Image I/O Algorithm
  • Match Features Constrained Algorithm
  • Optimize Cameras Algorithm
  • Vital to VXL Algorithm
  • VXL to Vital Algorithm
  • Split Image Algorithm
  • Triangulate Landmarks Algorithm
  • FFMPEG Video Input Algorithm
  • Image Memory Class
  • Image Memory Chunk Class

_images/Sprokit.png

Sprokit

KWIVER includes a data flow architecture called Sprokit. Sprokit is a dynamic pipeline configuration and execution framework that combines all of KWIVER’s other components – VITAL, Configuration, and Arrows to create a powerful, dynamic system for expressing processing pipelines that address computer vision problems.

Getting Started with Sprokit

In computer vision applications, the interaction between data structures (expressed in KWIVER as VITAL types) and algorithms (expressed in KWIVER as Arrows) can frequently be expressed as a pipeline of processing steps:

  1. Input processing to load images or video
  2. Manipulation and/or analysis of the imagery
  3. Output of resulting imagery and/or analytics in a useful format

A graphical representation of such a pipeline might look something like this:

_images/processing_pipeline.png

The Manipulation/Analysis step, step 2, might be a collection of processing operations that build on one another to achieve some result, such as represented in this graphical depiction of a more elaborate pipeline:

_images/complex_pipeline.png

Because this type of processing architecture is so common, KWIVER includes a data flow architecture called Sprokit. Sprokit is a dynamic pipeline configuration and execution framework that combines all of KWIVER’s other components – VITAL types, Configuration blocks, and Arrows to create a powerful, dynamic system for expressing processing pipelines that address computer vision problems.

Sprokit pipelines consist of a series of Sprokit processes that are connected together through “ports” over which various VITAL types flow. A Sprokit pipeline can be a straightforward sequence of steps as shown in the first pipeline figure or can consist of many steps arranged with various branches into a more sophisticated processing system as shown in the second pipeline figure.

A key benefit of Sprokit is that it provides algorithm-independent support for system engineering issues. Much of the difficulty translating a system such as the first figure from a conceptually simple diagram into a functioning system lies in the mundane issues of data transport, buffering, synchronization, and error handling. By providing a common representation for data (via VITAL) and processing steps (via Sprokit), KWIVER allows the developer to focus on foundational algorithmic research subject to the constraint of a well-defined system interface.

Sprokit Pipeline Example

The easiest way to understand Sprokit is to work through an example of building and executing a pipeline using existing KWIVER Arrows. For this example, we will filter object detections using the confidence scores associated with the detection and then write them back to disk. The pipeline accepts a collection of bounding boxes as inputs. Every bounding box is characterized by the coordinates for the box, the confidence score, and a class type. The VITAL type detected_object_set is used to represent these bounding boxes in the pipeline. The plugin_explorer application can be used to help construct the pipeline. After the pipeline is defined it can then be executed using pipeline_runner. Note that during this entire exercise no code is written or compiled.
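
As a preview of where this example is headed, a complete pipeline definition might look roughly like the sketch below. The process names, the port names on the filter and output processes, and the configuration keys are assumptions for illustration only; the rest of this section shows how to discover the real ones with plugin_explorer.

# filter_detections.pipe -- hypothetical sketch of the finished pipeline

process input :: detected_object_input
  file_name   = detections.csv
  reader:type = csv

process filter :: detected_object_filter
  # Configuration here selects and tunes the filter algorithm used by the
  # process; the exact keys come from plugin_explorer.
  filter:type = <filter_algorithm_name>

process output :: detected_object_output
  file_name   = filtered_detections.csv
  writer:type = csv

# Route the detection sets through the pipeline.
connect from input.detected_object_set
        to   filter.detected_object_set

connect from filter.detected_object_set
        to   output.detected_object_set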

Input

The first step is to define where the inputs come from and where they are going. We’ll use KWIVER’s plugin_explorer application to identify the processes that we want. The following command:

plugin_explorer --proc all --brief

Generates the following output (abbreviated for clarity in this document):

.
.
.
Process type: image_filter                              Apply selected image filter algorithm to incoming images.
Process type: image_writer                              Write image to disk.
Process type: image_file_reader         Reads an image file given the file name.
Process type: detected_object_input     Reads detected object sets from an input file.
        Detections read from the input file are grouped into sets for each image
        and individually returned.
Process type: detected_object_output    Writes detected object sets to an output file.
        All detections are written to the same file.
Process type: detected_object_filter    Filters sets of detected objects using the detected_object_filter
        algorithm.
Process type: video_input                               Reads video files and produces sequential images with metadata per frame.
.
.
.

We see detected_object_input and use the following command:

plugin_explorer --proc detected_object_input --detail

To get the following, more detailed information about detected_object_input:

Process type: detected_object_input
        Description:       Reads detected object sets from an input file.

                        Detections read from the input file are grouped into sets for each image
                        and individually returned.

                Properties: _no_reentrant

                -- Configuration --
                Name       : file_name
                Default    :
                Description:       Name of the detection set file to read.
                Tunable    : no

                Name       : reader
                Default    :
                Description:       Algorithm type to use as the reader.
                Tunable    : no

        -- Input ports --
                No input ports

        -- Output ports --
                Name       : detected_object_set
                Data type  : kwiver:detected_object_set
                Flags      :
                Description: Set of detected objects.

                Name       : image_file_name
                Data type  : kwiver:image_file_name
                Flags      :
                Description: Name of an image file. The file name may contain leading path components.

What this tells us is that

  1. There is a detected_object_input process that takes a file_name and a reader (more on that in a moment) as configuration parameters,
  2. That it has no input ports
  3. That it produces a detected_object_set and an image_file_name on its output ports when it runs.

The ports in a process are the points at which one process can connect to another. Input ports of one type can be connected to output ports of the same type from an earlier process in the pipeline. This particular process is referred to as an end cap, specifically an input end cap for the pipeline. This is because its function is to load data external to the Sprokit pipeline (for example from a CSV file) and present it for processing on the Sprokit pipeline. Similarly, output end caps would have no output ports but would convert their input data to some form external to the Sprokit pipeline.

Of particular interest is the reader parameter, which lets us select the particular arrow that will be used to read our detected_object_set.

We can use the following plugin_explorer command to see what is available for the configuration parameter:

plugin_explorer --algorithm detected_object_set_input --detail

Which results in the following output:

Plugins that implement type "detected_object_set_input"
---------------------
Info on algorithm type "detected_object_set_input" implementation "csv"
        Plugin name: csv      Version: 1.0
                        Detected object set reader using CSV format.

                         - 1: frame number
                         - 2: file name
                         - 3: TL-x
                         - 4: TL-y
                         - 5: BR-x
                         - 6: BR-y
                         - 7: confidence
                         - 8,9: class-name, score (this pair may be omitted or may repeat any
                        number of times)

                -- Configuration --
---------------------
Info on algorithm type "detected_object_set_input" implementation "kw18"
        Plugin name: kw18      Version: 1.0
                        Detected object set reader using kw18 format.

                                - Column(s) 1: Track-id
                                - Column(s) 2: Track-length (number of detections)
                                - Column(s) 3: Frame-number (-1 if not available)
                                - Column(s) 4-5: Tracking-plane-loc(x,y) (could be same as World-loc)
                                - Column(s) 6-7: Velocity(x,y)
                                - Column(s) 8-9: Image-loc(x,y)
                                - Column(s) 10-13: Img-bbox(TL_x,TL_y,BR_x,BR_y) (location of top-left &
                        bottom-right vertices)
                                - Column(s) 14: Area
                                - Column(s) 15-17: World-loc(x,y,z) (longitude, latitude, 0 - when
                        available)
                                - Column(s) 18: Timestamp (-1 if not available)
                                - Column(s) 19: Track-confidence (-1 if not available)

                -- Configuration --
---------------------
Info on algorithm type "detected_object_set_input" implementation "simulator"
        Plugin name: simulator      Version: 1.0
                        Detected object set reader using SIMULATOR format.

                        Detections are generated algorithmically.
                -- Configuration --
                "center_x" = "100"
                Description:       Bounding box center x coordinate.

                "center_y" = "100"
                Description:       Bounding box center y coordinate.

                "detection_class" = "detection"
                Description:       Label for detection detected object type

                "dx" = "0"
                Description:       Bounding box x translation per frame.

                "dy" = "0"
                Description:       Bounding box y translation per frame.

                "height" = "200"
                Description:       Bounding box height.

                "max_sets" = "10"
                Description:       Number of detection sets to generate.

                "set_size" = "4"
                Description:       Number of detection in a set.

                "width" = "200"
                Description:       Bounding box width.

---------------------
Info on algorithm type "detected_object_set_input" implementation "kpf_input"
        Plugin name: kpf_input      Version: 1.0
                        Detected object set reader using kpf format.
                -- Configuration --

As we can see, we have a number of choices including a CSV reader, a simulator, and some others. For this example we’ll use the CSV reader when we construct the pipeline.
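
To make the format concrete, here is what a small detection file in this CSV format might look like; the file name and values are hypothetical and follow the column layout listed above (frame number, file name, TL-x, TL-y, BR-x, BR-y, confidence, then optional class-name/score pairs):

0,frame_0000.png,100,100,300,300,0.87,person,0.87
0,frame_0000.png,412,220,480,310,0.42,vehicle,0.42
1,frame_0001.png,105,102,305,302,0.91,person,0.91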

Filter

Similarly, we can look at filters for detected_object_sets:

plugin_explorer --proc detected_object_filter --detail

Which gives us:

Process type: detected_object_filter
Description:       Filters sets of detected objects using the detected_object_filter
                algorithm.

        Properties: _no_reentrant

        -- Configuration --
        Name       : filter
        Default    :
        Description:       Algorithm configuration subblock.
        Tunable    : no

-- Input ports --
        Name       : detected_object_set
        Data type  : kwiver:detected_object_set
        Flags      : _required
        Description: Set of detected objects.

-- Output ports --
        Name       : detected_object_set
        Data type  : kwiver:detected_object_set
        Flags      :
        Description: Set of detected objects.

And the associated Arrows:

Plugins that implement type "detected_object_filter"
---------------------
Info on algorithm type "detected_object_filter" implementation "class_probablity_filter"
        Plugin name: class_probablity_filter      Version: 1.0
                        Filters detections based on class probability.

                        This algorithm filters out items that are less than the threshold. The
                        following steps are applied to each input detected object set.

                        1) Select all class names with scores greater than threshold.

                        2) Create a new detected_object_type object with all selected class names
                        from step 1. The class name can be selected individually or with the
                        keep_all_classes option.

                        3) The input detection_set is cloned and the detected_object_type from
                        step 2 is attached.
                -- Configuration --
                "keep_all_classes" = "true"
                Description:       If this option is set to true, all classes are passed through this filter
                        if they are above the selected threshold.

                "keep_classes" = ""
                Description:       A list of class names to pass through this filter. Multiple names are
                        separated by a ';' character. The keep_all_classes parameter overrides
                        this list of classes. So be sure to set that to false if you only want the
                        listed classes.

                "threshold" = "0"
                Description:       Detections are passed through this filter if they have a selected
                        classification that is above this threshold.

We will use the class_probablity_filter to pass only detections whose class scores are above a confidence threshold that we'll set in our pipeline configuration file. By default (keep_all_classes = true) all classes are considered; the example below shows how individual classes could be selected instead.
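
For instance, if we only wanted to keep person and vehicle detections (hypothetical class names), the filter could be configured in a pipe file roughly as follows, using the nested algorithm configuration pattern that also appears in the simulator example later in this document:

process filter :: detected_object_filter
   filter:type = class_probablity_filter
   filter:class_probablity_filter:keep_all_classes = false
   filter:class_probablity_filter:keep_classes = person;vehicle
   filter:class_probablity_filter:threshold = .5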

Output

Finally, we will select our output process, which has the following definition:

Process type: detected_object_output
 Description:       Writes detected object sets to an output file.

     All detections are written to the same file.

   Properties: _no_reentrant

   -- Configuration --
   Name       : file_name
   Default    :
   Description:       Name of the detection set file to write.
   Tunable    : no

   Name       : writer
   Default    :
   Description:       Block name for algorithm parameters. e.g. writer:type would be used to
     specify the algorithm type.
   Tunable    : no

 -- Input ports --
   Name       : detected_object_set
   Data type  : kwiver:detected_object_set
   Flags      : _required
   Description: Set of detected objects.

   Name       : image_file_name
   Data type  : kwiver:image_file_name
   Flags      :
   Description: Name of an image file. The file name may contain leading path components.

 -- Output ports --

This output process accepts a detected_object_set and an image_file_name as input and writes out the result. Let's look at the arrows available to implement the writer:

Plugins that implement type "detected_object_set_output"
---------------------
Info on algorithm type "detected_object_set_output" implementation "csv"
        Plugin name: csv      Version: 1.0
                        Detected object set writer using CSV format.

                         - 1: frame number
                         - 2: file name
                         - 3: TL-x
                         - 4: TL-y
                         - 5: BR-x
                         - 6: BR-y
                         - 7: confidence
                         - 8,9: class-name, score (this pair may be omitted or may repeat any
                        number of times)

                -- Configuration --
---------------------
Info on algorithm type "detected_object_set_output" implementation "kw18"
        Plugin name: kw18      Version: 1.0
                        Detected object set writer using kw18 format.

                                - Column(s) 1: Track-id
                                - Column(s) 2: Track-length (number of detections)
                                - Column(s) 3: Frame-number (-1 if not available)
                                - Column(s) 4-5: Tracking-plane-loc(x,y) (could be same as World-loc)
                                - Column(s) 6-7: Velocity(x,y)
                                - Column(s) 8-9: Image-loc(x,y)
                                - Column(s) 10-13: Img-bbox(TL_x,TL_y,BR_x,BR_y) (location of top-left &
                        bottom-right vertices)
                                - Column(s) 14: Area
                                - Column(s) 15-17: World-loc(x,y,z) (longitude, latitude, 0 - when
                        available)
                                - Column(s) 18: Timestamp (-1 if not available)
                                - Column(s) 19: Track-confidence (-1 if not available)

                -- Configuration --
                "tot_field1_ids" = ""
                Description:       Comma separated list of ids used for TOT field 1.

                "tot_field2_ids" = ""
                Description:       Comma separated list of ids used for TOT field 2.

                "write_tot" = "false"
                Description:       Write a file in the vpView TOT format alongside the computed tracks.
---------------------
Info on algorithm type "detected_object_set_output" implementation "kpf_output"
        Plugin name: kpf_output      Version: 1.0
                        Detected object set writer using kpf format.
                -- Configuration --

In this case, we’ll select the DIVA KPF writer when we assemble our pipeline.

Pipeline

A text file is used to define the pipeline: the processes, their input and output port connections, and their configuration parameters.

Based on the information we obtained from plugin_explorer, we'll construct a pipeline with the following structure:

_images/sprokit_basic_pipeline.png

This structure is captured by the following pipeline file, which configures our selected input, filter, and output:

# --------------------------------------------------
process reader :: detected_object_input
                                file_name = sample_detected_objects.csv
                                reader:type = csv

# --------------------------------------------------
process filter :: detected_object_filter
                                filter:type = class_probablity_filter
                                filter:class_probablity_filter:threshold = .5

connect from reader.detected_object_set to filter.detected_object_set

# --------------------------------------------------
process writer :: detected_object_output
                                file_name = sample_filtered_detected_objects.kpf
                                writer:type = kpf_output

connect from filter.detected_object_set to writer.detected_object_set

In this pipeline file we define three processes: reader, filter, and writer. We connect the detected_object_set output of reader to the detected_object_set input of filter. We configure filter to pass only detected objects with a confidence above a threshold of 0.5 and connect its detected_object_set output port to our writer process's input port. We select a KPF writer for our writer process.

We can run the pipeline with the following command:

pipeline_runner --pipe sample_reader_filter_writer.pipe

When the pipeline runs it will read a set of detected objects from the file sample_detected_objects.csv, filter out any that have a confidence below 0.5, and then write the remainder to a KPF file for further processing.
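
Individual configuration values can also be overridden from the command line with the --set option of pipeline_runner, which is used extensively in the distributed processing examples later in this document. For example, to point the reader at a different (hypothetical) input file without editing the pipe file:

pipeline_runner --pipe sample_reader_filter_writer.pipe --set reader:file_name=other_detections.csv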

Python Processes

One of KWIVER's great strengths (as provided by Sprokit) is the ability to create hybrid pipelines that combine C++ and Python processes in the same pipeline. This greatly facilitates prototyping complex processing pipelines. To test this out we'll use a simple process called numbers, which generates numbers on a Sprokit port. We'll also use a simple Python process called kw_print_number_process that prints each number it receives; its code can be found in sprokit/processes/python/kw_print_number_process.py in the KWIVER repository.

As usual, we can learn about this process with the following command:

plugin_explorer --proc kw_print_number_process -d

Which produces the following output:

Process type: kw_print_number_process
  Description: A Simple Kwiver Test Process
  Properties: _no_reentrant, _python
Configuration:
  Name       : output
  Default    : .
  Description: The path for the output file.
  Tunable    : no

Input ports:
  Name       : input
  Type       : integer
  Flags      : _required
  Description: Where numbers are read from.

Output ports:

In order to get around limitations imposed by the Python Global Interpreter Lock, we'll use a different Sprokit scheduler for this pipeline: the pythread_per_process scheduler, which does essentially what its name says and creates a Python thread for every process in the pipeline:

pipeline_runner -S pythread_per_process -p </path/to/kwiver/source>/sprokit/pipelines/number_flow_python.pipe

As with the previous pipeline, the numbers will be written to an output file, this time numbers_from_python.txt.
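
For reference, a pipeline like number_flow_python.pipe might look roughly like the following sketch. The port and configuration names used here for the numbers process are assumptions (verify them with plugin_explorer --proc numbers --detail); only the kw_print_number_process side is taken from the output above:

process gen :: numbers
   end = 20                               # assumed config entry: how many numbers to generate

process out :: kw_print_number_process
   output = numbers_from_python.txt

connect from gen.number to out.input      # the "number" output port name is an assumption

# The scheduler can also be selected in the pipe file itself,
# instead of with the -S command line option.
config _scheduler
   type = pythread_per_process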

Distributed Processing

A key component required in KWIVER to enable the construction of fully elaborated computer vision systems is a strategy for multi-processing by distributing Sprokit pipelines across multiple computing nodes. This is a critical requirement since modern computer vision algorithms tend to be resource hungry, especially deep learning based algorithms, which require extensive GPU support to run optimally. KWIVER has been used in systems built with `The Robot Operating System (ROS) <http://www.ros.org>`_ and Apache Kafka, among others.

KWIVER, however, also provides a built-in mechanism for constructing multi-computer processing systems, with message passing as an integral part of the KWIVER framework. We have chosen ZeroMQ for the message passing architecture because it readily scales from small brokerless prototype systems to more complex broker-based architectures spanning many dozens of communicating elements.

The current ZeroMQ system focuses on “brokerless” processing, relying on pipeline configuration settings to establish communication topologies. In practice this means that pipelines must be constructed in a way that “knows” where their communication partners are located on the network (hostnames and ports). While this is sufficient to stand up a number of interesting and useful systems, it is expected that KWIVER will evolve to provide limited brokering services to enable more flexibility and dynamism when constructing KWIVER based multi-processing systems.

KWIVER’s multi-processing support is composed of two components:

  1. Serialization
  2. Transport

In keeping with KWIVER’s architecture, both of these are represented as abstractions, under which specific implementations (JSON, Protocol Buffers, ZeroMQ, ROS etc.) can be constructed.

Serialization

KWIVER’s serialization strategy is based on KWIVER’s arrows. There is a serialization arrow for each of the VITAL data types. Then there are implementations for various serialization protocols. KWIVER supports JSON based serialization and binary serialization. For binary serialization, Google’s Protocol Buffers are used. While JSON based serialization makes reading and debugging types like detected_object_set easy, binary serialization is used to serialize data heavy elements like images. As with other KWIVER arrows, providing new implementations supporting other protocols is straightforward.
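
For example, switching a pipeline between binary and JSON serialization is just a configuration change on the serializer process (a sketch; a complete protobuf-based example appears later in this section):

process ser :: serializer
   serialization_type = json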

Constructing a Serialization Algorithm

All serializer algorithms must be derived from the data_serializer algorithm. All derived classes must implement the deserialize() and serialize() methods, in addition to the configuration support methods.

The serialize() method converts one or more data items into a serialized byte stream. The format of the byte stream depends on the serialization_type being implemented. Examples of serialization types are json and protobuf.

The serialize() and deserialize() methods must be compatible so that the deserialize() method can take the output of the serialize() method and reproduce the original data items.

The serialize() method takes in a map of one or more named data items and the deserialize() method produces a similar map. It is up to the data_serializer implementation to define what these names are and the associated data types.

Basic one-element data_serializer implementations usually do not require any configuration, but more complicated multi-input serializers can require an arbitrary amount of configuration data. These configuration parameters are supplied to the implementation through its normal algorithm configuration support methods.

Serialization and Deserialization Processes

The KWIVER serialization infrastructure is designed to allow multiple cooperating Sprokit pipelines to interact with one another in a multiprocessing environment. The purpose of serialization is to package data in a format (a byte string) that can easily be transmitted or received via a transport. The purpose of the serialization and deserialization processes is to convert one or more Sprokit data ports to and from a single byte string for transmission by a transport process.

The serializer process dynamically creates serialization algorithms based on the ports being connected. A fully qualified serialization port name takes the following form:

<process>.<group>/<element>

On the input side of the serializer process, the fully qualified name is used to group individual data elements:

connect from detected_object_reader.detected_object_set to serializer.detections/dos
connect from image_reader.image to serializer.detections/image

On the output side, the <group> portion of the name is used to connect the entire serialized set (Sprokit's pipeline handling mechanism will ensure synchronization of the elements) on the detections output port:

connect from serializer.detections to transport.serialized_message

Similarly, for a deserializer the input side uses the group name:

connect from transport.serialized_message to deserializer.detections

And the output side presents the deserialized element names:

connect from deserializer.detections/dos to detected_object_writer.dos
connect from deserializer.detections/image to image_writer.image

There are some things worth noting:

  • The serialized group name is embedded in the serialized “packet”. This allows the serializer and deserializer to validate that the serialized output and input match up. Connecting a serializer output port group to a deserializer input port different_group will result in an error.
  • A single serializer can have individual elements connected to different input groups. This will simply create multiple group output ports. Similarly, a deserializer can have multiple groups on the input side – the individual elements for both groups will appear on the output side (with the appropriate group name in the port name). A sketch of a two-group serializer follows below.
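
For illustration, here is a sketch of a serializer handling two groups; the process names and element names are hypothetical:

connect from detection_reader.detected_object_set to serializer.detections/dos
connect from image_reader.image                   to serializer.images/img

connect from serializer.detections to det_transport.serialized_message
connect from serializer.images     to img_transport.serialized_message
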
[de]serializer process details

The serializer process always requires the serialization_type config entry. The value supplied is used to select the set of data_serializer algorithms. If the type specified is json, then the data_serializer will be selected from the ‘serialize-json’ group. The list of data_serializer algorithms can be displayed with the following command

plugin_explorer --fact serialize

Transport Processes

KWIVER's transport strategy is structured as Sprokit end-caps (the needs and requirements of the transport implementations are somewhat dependent on Sprokit's stream processing implementation and so do not lend themselves to implementation as KWIVER arrows). The current implementation focuses on one-to-many and many-to-one topologies for VITAL data types.

Transport processes take a serialized message (byte buffer) and interface it to a specific data transport. There are two types of transport processes: send and receive. Send processes take the byte buffer from a serializer process and put it on the transport. Receive processes take a message from the transport and put it on the output port to go to a deserializer process. The port name for both types of process is "serialized_message".

The canonical implementation of the Sprokit transport processes is based on ZeroMQ, specifically ZeroMQ’s PUB/SUB pattern with REQ/REP synchronization.

The Sprokit ZeroMQ implementation is contained in two Sprokit processes, zmq_transport_send_process and zmq_transport_receive_process:

zmq_transport_send_process


zmq_transport_receive_process


Distributed Pipelines Examples

To demonstrate the use of Sprokit's ZeroMQ distributed processing capabilities, we'll first need some simple Sprokit pipeline files. The first will generate some synthetic detected_object_set data, serialize it into Protocol Buffers, and transmit the result with ZeroMQ. Here is a figure that illustrates the pipeline:

_images/zmq_send_pipeline.png

And here is the actual .pipe file that implements it:

process sim :: detected_object_input
                                file_name = none
                                reader:type = simulator
                                reader:simulator:center_x = 100
                                reader:simulator:center_y = 100
                                reader:simulator:dx = 10
                                reader:simulator:dy = 10
                                reader:simulator:height = 200
                                reader:simulator:width = 200
                                reader:simulator:detection_class = "simulated"

# --------------------------------------------------
process ser :: serializer
                                serialization_type = protobuf

connect from sim.detected_object_set to ser.dos

# --------------------------------------------------
process zmq :: zmq_transport_send
                                port = 5560

connect from ser.dos to zmq.serialized_message

To receive the data, we’ll create another pipeline that receives the ZeroMQ data, deserializes it from the Protocol Buffer container and then writes the resulting data to a CSV file. This pipeline looks something like this:

_images/zmq_receive_pipeline.png

The actual .pipe file looks like this:

process zmq :: zmq_transport_receive
        port = 5560
        num_publishers = 1

# --------------------------------------------------
process dser :: deserializer
                                serialization_type = protobuf

connect from zmq.serialized_message to dser.dos

# --------------------------------------------------
process sink :: detected_object_output
                                file_name = received_dos.csv
                                writer:type = csv

connect from dser.dos to sink.detected_object_set

We’ll use pipeline_runner to start these pipelines. First, we’ll start the send pipeline:

pipeline_runner --pipe test_zmq_send.pipe

In a second terminal, we'll start the receiver:

pipeline_runner --pipe test_zmq_recv.pipe

When the receiver is started, the data flow will start immediately. At the end of execution the file received_dos.csv should contain the transmitted, synthesized detected_object_set data.

Multiple Publishers

With the current implementation of the ZeroMQ transport and Sprokit’s dynamic configuration capabilities, we can use these pipelines to create more complex topologies as well. For example, we can set up a system with multiple publishers and a receiver that merges the results. Here is a diagram of such a topology:

_images/zmq_multi_pub.png

We can use the same .pipe files by reconfiguring the pipeline on the command line using pipeline_runner. Here’s how we’ll start the first sender. In this case we’re simply changing the detection_class configuration for the simulator so that we can identify this sender’s output in the resulting CSV file:

pipeline_runner --pipe test_zmq_send.pipe --set sim:reader:simulator:detection_class=detector_one

In another terminal we can start a second sender. In this case we also change the detection_class configuration and we change the ZeroMQ port to be two above the default port of 5560. This leaves room for the synchronization port of the first sender and sets up the two senders in the configuration expected by a multi-publisher receiver:

pipeline_runner --pipe test_zmq_send.pipe --set sim:reader:simulator:detection_class=detector_two --set zmq:port=5562

Finally, we’ll start the receiver. We’ll simply change the num_publishers parameter to 2 so that it connects to both publishers, starting at port 5560 for the first and automatically adding two to get to 5562 for the second:

pipeline_runner --pipe test_zmq_recv.pipe --set zmq:num_publishers=2

Multiple Subscribers

In a similar fashion, we can construct topologies where multiple subscribers subscribe to a single publisher. Here is a diagram of this topology:

_images/zmq_multi_sub.png

First we’ll start our publisher, reconfiguring it to expect 2 subscribers before starting:

pipeline_runner --pipe test_zmq_send.pipe  --set zmq:expected_subscribers=2

Then, we’ll start our first subscriber, changing the output file name to received_dos_one.csv:

pipeline_runner --pipe test_zmq_recv.pipe --set sink:file_name=received_dos_one.csv

Finally, we'll start our second subscriber, this time changing the output file name to received_dos_two.csv:

pipeline_runner --pipe test_zmq_recv.pipe --set sink:file_name=received_dos_two.csv

Worked examples of these pipelines using the TMUX terminal multiplexer can be found in test_zmq_multi_pub_tmux.sh and test_zmq_multi_sub_tmux.sh in the sprokit/tests/pipelines directory of the KWIVER repository.

Sprokit Architecture

Sprokit is a “Stream Processing Toolkit” that provides infrastructure for chaining together algorithms into pipelines for processing streaming data sources. The most common use case of Sprokit is video processing, but Sprokit is data type agnostic and can be used for any type of streaming data. Sprokit allows the user to dynamically connect and configure a pipeline by chaining together processing nodes called “processes” into a directed graph with data sources and sinks. Sprokit schedules the jobs to run each process and keeps data flowing through the pipeline. Sprokit also allows processes written in Python to be interconnected with those written in C++.

Pipeline Design

Overview

The design of the new pipeline is meant to address issues that have come up previously and to add functionality that has long been wanted, including Python support, interactive pipeline debugging, better concurrency support, and more.

Type Safety

The codebase strives for type safety where possible. This is achieved by using typedef to rename types. When applicable, typedef types also expose objects only through a shared_ptr to prevent unintentional deep copies from occurring and to simplify memory management.

The use of typedef within the codebase also simplifies changing core types if necessary (e.g., replacing std::shared_ptr with a different managed pointer class).

Some of the core classes (e.g., sprokit::datum and sprokit::stamp) are immutable through their respective typedef and can only be created with static methods of the respective class, which enforces that they are constructed in specific ways.
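
As a rough illustration of this pattern, the following simplified C++ sketch mirrors the typedef-plus-factory approach described above. It is not the actual sprokit declaration of datum, just a minimal stand-in:

#include <memory>
#include <string>

class datum;

// The typedef exposes the class only through a shared pointer to const,
// so client code cannot accidentally deep-copy or mutate an instance.
typedef std::shared_ptr< datum const > datum_t;

class datum
{
public:
  // Instances can only be created through static factory methods,
  // which enforces that objects are constructed in specific ways.
  static datum_t new_datum( std::string const& value )
  {
    return datum_t( new datum( value ) );
  }

  std::string const& value() const { return m_value; }

private:
  explicit datum( std::string const& value ) : m_value( value ) { }

  std::string m_value;
};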

Introspection

Processes are designed to be introspected so that information about a process can be obtained at runtime. This also allows processes to be created at runtime and pipelines to be constructed dynamically. By abstracting out C++ types, language bindings do not need to deal with templates, custom bindings for every plugin, and other intricacies that bindings to C++ libraries usually entail.

Thread safety

Processes within the new pipeline are encouraged to be thread safe. When thread safety cannot be ensured, it must be explicitly marked. This is so that any situation where data is shared across threads and more than one thread expects to be able to modify the data is detected as an error.

Error Handling

Errors within the pipeline are indicated with exceptions. Exceptions allow the error to be handled at the appropriate level, and if the error is not caught, the message will reach the user. This forces ignoring errors to be explicit, since not all compilers allow decorating functions to warn when their return value is ignored.

Control Flow

The design of the sprokit::process class is such that the heavy lifting is done by the base class and specialized computations are handled as needed by a subclass. This allows a new process to be written with a minimal amount of boilerplate. Where special logic is required, a subclass can override a virtual method to add supplemental logic supporting a feature.

For example, when information about a port is requested, the sprokit::process::input_port_info method is called, which delegates to the sprokit::process::_input_port_info method which can be overridden. By default, it returns information about the port if it has been declared; otherwise it throws an exception indicating that the port does not exist. To create ports on the fly, a process can reimplement sprokit::process::_input_port_info to create the port so that it exists and an exception is not thrown.

The rationale for not making sprokit::process::input_port_info virtual is to enforce that API specifications are met. For example, when connecting edges, the main method makes sure that the edge is not NULL and that the process has not been initialized yet.

Data Flow

Data flows within the pipeline via the sprokit::edge class, which ensures thread-safe communication between processes. A process communicates with edges via its input and output ports. Ports are named communication sockets to which edges may be connected so that a process can send and receive data. An input port may have at most one edge sending data to it, while an output port may feed into any number of edges.

Ports

Ports are declared within a process and managed by the base sprokit::process class to minimize the amount of code that needs to be written to handle communication within the pipeline.

A port has a “type” associated with it which is used to detect errors when connecting incompatible ports with each other. These types are logical types, not types within a programming language. A double can represent a distance or a time interval (or even a distance in a different unit!), but a port which uses a double to represent a distance would have a type of distance_in_meters, not double. There are two special types: one indicates that any type is accepted on the port, and the other indicates that no data is ever expected on the port.

Ports can also have flags associated with them. Flags give extra information about the data that is expected on a port. A flag can indicate that the data on the port must be present for the computation to make sense (either it is required for the computation, or if the result is ignored there is no point in doing the computation in the first place), that the data on the port should not be modified (because it is only a shallow copy and other processes modifying the data would invalidate results), or that the data on the port will be modified (used to raise errors when connected to a port with the previous flag). Flags are meant to bring attention to the fact that something beyond the normal is happening to data that flows through the port.

Packets

Each data packet within an edge is made up of two parts: a status packet and a stamp. The stamp is used to ensure that the various flows through the pipeline are synchronized.

The status packet indicates the result of the computation that creates the result available on a port. It can indicate that the computation succeeded (with the result), failed (with the error message), could not be completed for some reason (e.g., not enough data), or complete (the input data is exhausted and no more results can be made). Having a status message for each result within the pipeline allows for more fine-grained data dependencies to be made. A process which fails to get some extra data related to its main data stream (e.g., metadata on a video frame) does not have to create invalid objects nor indicate failure to other, unrelated, ports.

A stamp consists of a step count and an increment. If two stamps have the same step count, they refer to the same step of the data stream and the associated data are considered synchronized. A stamp's step count is incremented at the source for each new data element. Step counts are unitless and should only be used for ordering information. In fact, the sprokit::stamp interface enforces this and only provides a comparison operator between stamps. Since step counts are unitless and discrete, inserting elements into the stream requires that the step counts change.

The base sprokit::process class handles the common case for incoming and outgoing data. The default behavior is that if an input port is marked as being “required”, its status message is aggregated with those of the other required inputs:

  • If a required input is complete, then the current process’ computation is considered to be complete as well.
  • Otherwise, if a required input is an error message, then the current process’ computation is considered an error due to an error as input (following the GIGO principle).
  • Otherwise, if a required input is empty, then the current process’ computation is considered empty (the computation is missing data and cannot be completed).
  • Then, since all of the required inputs are available, the stamps are checked to ensure that they are on the same step count.

If custom logic is required to manage ports or data, this control flow can be disabled piecemeal and handled manually. The status check can be disabled on a per-process basis so that it can be managed in a special way.

Pipeline Execution

The execution of a pipeline is separate from the construction and verification. This allows specialized schedulers to be used in situations where some resource is constrained (one scheduler to keep memory usage low, another to minimize CPU contention, another for an I/O-heavy pipeline, and others).

Pipeline Declaration Files

Pipeline declaration files allow a pipeline to be loaded from a plain text description. They provide all of the information necessary to create and run a pipeline and may be composed of files containing pipeline specification information that are included into the main file.

The ‘#’ character is used to introduce a comment. All text from the ‘#’ to the end of the line is considered a comment.

A pipeline declaration file is made up of the following sections:

  • Configuration Section
  • Process Definition Section
  • Connection Definition
Configuration Entries

Configuration entries are statements which add an entry to the configuration block for the pipeline. The general form for a configuration entry is a key / value pair, as shown below:

key = value

The key specification can be hierarchical and be specified with multiple components separated by a ‘:’ character. Key components are described by the following regular expression [a-zA-Z0-9_-]+.

key:component:list = value

Each leading key component (the name before the ‘:’) establishes a subblock in the configuration. These subblocks are used to group configuration entries for different sections of the application.

The value for a configuration entry is the character string that follows the ‘=’ character. The value has leading and trailing blanks removed. Embedded blanks are preserved without the addition of enclosing quotes. If quotes are used in the value portion of the configuration entry, they are not processed in any way and remain part of the value string. That is, if you put quotes in the value component of a configuration entry, they will be there when the value is retrieved in the program.
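
For example (the keys and values here are hypothetical; the trailing comments are only explanatory):

algorithm_name =    fast_detector       # value is "fast_detector"; surrounding blanks are removed
window_title   = "KWIVER  Viewer"       # value is "KWIVER  Viewer", quote characters included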

Configuration items can have their values replaced or modified by subsequent configuration statements, unless the read-only flag is specified (see below).

The value component may also contain macro references that are replaced with other text as the config entry is processed. Macros can be used to dynamically adapt a config entry to its operating environment without requiring the entry to be hand edited. The macro substitution feature is described below.

Configuration entry attributes

Configuration keys may have attributes associated with them. These attributes are specified immediately after the configuration key. All attributes are enclosed in a single set of brackets (e.g. []). If a configuration key has more than one attribute they are all in the same set of brackets separated by a comma.

Currently the only understood flags are:

ro - Marks the configuration value as read-only. A configuration entry that is marked as read-only may not have its value subsequently modified in the pipeline file or programmatically by the program.

tunable - Marks the configuration value as tunable. A configuration entry that is marked as tunable can have a new value presented to the process during a reconfigure operation.

Examples:

foo[ro] = bar # results in foo = "bar"
foo[ro, tunable] = bar
Macro Substitution

The values for configuration elements can be composed from static text in the config file and dynamic text supplied by macro providers. The format of a macro specification is $TYPE{name} where TYPE is the name of macro provider and name requests a particular value to be supplied. The name entry is specific to each provider.

Only the text of the macro specification is replaced; any leading or trailing blanks around it remain. If the value of a macro is not defined, the macro specification is replaced with the null string.
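
As a hypothetical illustration of the undefined case:

data_dir = $ENV{UNDEFINED_VARIABLE}/data    # becomes "/data" if UNDEFINED_VARIABLE is not set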

Macro Providers

The macro providers are listed below and discussed in the following sections.

  • LOCAL - locally defined values
  • ENV - program environment
  • CONFIG - values from current config block
  • SYSENV - system environment
LOCAL Macro Provider

This macro provider supplies values that have been stored previously in the config file. Local values are specified in the config file using the “:=” operator. For example the config entry mode := online makes $LOCAL{mode} available in subsequent configuration entries.:

mode := online
...
config_file = data/$LOCAL{mode}/model.dat

This type of macro definition can appear anywhere in a config file and becomes available for use on the next line. The current block context has no effect on the name of the macro.

ENV Macro Provider

This macro provider gives access to the current program environment. The value of an environment variable such as “HOME” can be used by specifying $ENV{HOME} in the config file.

CONFIG Macro Provider

This macro provider gives access to previously defined configuration entries. For example:

config foo
  bar = baz

makes the value available to subsequent lines in the config file by specifying $CONFIG{foo:bar}, as shown below:

value = mode-$CONFIG{foo:bar}ify
SYSENV Macro Provider

This macro provider supports the following symbols derived from the current host operating system environment.

  • curdir - current working directory
  • homedir - current user’s home directory
  • pid - current process id
  • numproc - number of processors in the current system
  • totalvirtualmemory - number of KB of total virtual memory
  • availablevirtualmemory - number of KB of available virtual memory
  • totalphysicalmemory - number of KB of total physical memory
  • availablephysicalmemory - number of KB of available physical memory
  • hostname - name of the host computer
  • domainname - name of the computer in the domain
  • osname - name of the host operating system
  • osdescription - description of the host operating system
  • osplatform - platform name (e.g. x86-64)
  • osversion - version number for the host operating system
  • iswindows - TRUE if running on Windows system
  • islinux - TRUE if running on Linux system
  • isapple - TRUE if running on Apple system
  • is64bits - TRUE if running on a 64 bit machine
Block Specification

In some cases the fully qualified configuration key can become long and unwieldy. The block directive can be used to establish a configuration context that is applied to the enclosed configuration entries. For example, block alg starts a block with the block name alg, and all entries within the block will have alg: prepended to the entry name:

block alg
   mode = red      # becomes alg:mode = red
endblock

Blocks can be nested to an arbitrary depth with each providing context for the enclosed entries.:

block foo
  block bar:fizzle
    mode = yellow     # becomes foo:bar:fizzle:mode = yellow
  endblock
endblock
Including Files

The include directive logically inserts the contents of the specified file into the current file at the point of the include directive. Include files provide an easy way to break up large configurations into smaller reusable pieces.

include filename

The filename specified may contain references to an ENV or SYSENV macro. The macro reference is expanded before the file is located. No other macro providers are supported.

If the file name is not an absolute path, it is located by scanning the current config search path. The manner in which the config include path is created is described in a following section. If the file is still not found, the stack of include directories is scanned from the current include file back to the initial config file. Macro substitution, as described above, is performed on the file name string before the search is done.

Block specifications and include directives can be used together to build reusable and shareable configuration snippets.:

block main
  block alg_one
    include alg_foo.config
  endblock

  block alg_two
    include alg_foo.config
  endblock
endblock

In this case the same configuration structure can be used in two places in the overall configuration.

Include files can be nested to an arbitrary depth.

Relativepath Modifier

There are cases where an algorithm needs an external file containing binary data that is tied to a specific configuration. These data files are usually stored with the main configuration files. Specifying a full hard coded file path is not portable between different users and systems.

The solution is to specify the location of these external files relative to the configuration file and use the relativepath modifier to construct a full, absolute path at run time by prepending the configuration file directory path to the value. The relativepath keyword appears before the key component of a configuration entry:

relativepath data_file = ../data/online_dat.dat

If the current configuration file is /home/vital/project/config/blue/foo.config, the resulting config entry for data_file will be /home/vital/project/config/blue/../data/online_dat.dat.

The relativepath modifier can be applied to any configuration entry, but it only makes sense to use it with relative file specifications.

Configuration Section

Configuration sections introduce a named configuration subblock that can provide configuration entries to runtime components or make the entries available through the $CONFIG{key} macro.

The configuration blocks for _pipeline and _scheduler are described below.

The form of a configuration section is as follows:

config <key-path> <line-end>
      <config entries>
Examples

The following examples show how entries in a configuration section expand into fully qualified configuration keys:

config common
  uncommon = value
  also:uncommon = value

Creates configuration items:

common:uncommon = value
common:also:uncommon = value

Another example:

config a:common:path
  uncommon:path:to:key = value
  other:uncommon:path:to:key = value

Creates configuration items:

a:common:path:uncommon:path:to:key = value
a:common:path:other:uncommon:path:to:key = value
Process Definition Section

A process block adds a process to the pipeline with optional configuration items. A process is added as an instance of a registered process type under the specified name. Optional configuration entries can follow the process declaration; they are made available to that process when it is started.

Specification

A process specification is as follows. An instance of the specified process-type is created and is available in the pipeline under the specified process-name:

process <process-name> :: <process-type>
  <config entries>
Examples

An instance of my_process_type is created and named my_process:

process my_process :: my_process_type

process another_process
  :: awesome_process
     some_param = some_value
Non-blocking processes

A process can be declared as non-blocking which indicates that input data is to be dropped if the input port queues are full. This is useful for real-time processing where a process is the bottleneck.

The non-blocking behaviour is a process attribute that is specified as a configuration entry in the pipeline file. The syntax for this configuration option is as follows:

process blocking_process
  :: awesome_process
   _non_blocking = 2

The special “_non_blocking” configuration entry specifies the capacity of all incoming edges to the process. When the edges are full, the input data are dropped. The input edge size is set to two entries in the above example. This capacity specification overrides all other edge capacity controls for this process only.

Static port values

Declaring a port static allows the port to be supplied a constant value from the config, in addition to the option of it being connected in the normal way. Ports are declared static when they are created by a process by adding the flag_input_static option to the declare_input_port() method.

When a port is declared as static, the value at this port may be supplied via the configuration using the special static/ prefix before the port name. The syntax for specifying static values is:

static/<port-name> = <value>

If a port is connected and also has a static value configured, the configured static value is ignored.

The following is an example of configuring a static port value.:

process my_process
  :: my_process_type
     static/port = value
Instrumenting Processes

A process may request to have its instrumentation calls handled by an external provider. This is done by adding the _instrumentation block to the process config.:

process my_process
  :: my_process_type
  block _instrumentation
     type = foo
     block  foo
       file = output.dat
       buffering = optimal
     endblock
  endblock

The type parameter specifies the instrumentation provider, “foo” in this case. If the special name “none” is specified, then no instrumentation provider is loaded. This is the same as not having the config block present. The remaining configuration items that start with “_instrumentation:<type>” are considered configuration data for the provider and are passed to the provider after it is loaded.

Connection Definition

A connection definition specifies how an output port of one process is connected to an input port of another process. These connections define the data flow of the pipeline graph:

connect from <process-name> . <output-port-name> to <process-name> . <input-port-name>
Examples

This example connects a timestamp port to two different processes.:

connect from input.timestamp      to   stabilize  .timestamp
connect from input.timestamp      to   writer     .timestamp
Pipeline Edge Configuration

A pipeline edge is a connection between two ports. The behaviour of the edges can be configured if the defaults are not appropriate. Note that defining a process as non-blocking overrides all input edge configurations for that process only.

Pipeline edges are configured in a hierarchical manner. First there is the _pipeline:_edge config block which establishes the basic configuration for all edges. This can be specified as follows:

config _pipeline:_edge
       capacity = 30     # set default edge capacity

Currently the only attribute that can be configured is “capacity”.

The config for the edge type overrides the default configuration so that edges used to transport specific data types can be configured as a group. This edge type configuration is specified as follows:

config _pipeline:_edge_by_type
       image_container:capacity = 30
       timestamp:capacity = 4

Where image_container and timestamp are the type names used when defining process ports.

After this set of configurations have been applied, edges can be more specifically configured based on their connection description. An edge connection is described in the config as follows:

config _pipeline:_edge_by_conn
        <process>:<up_down>:<port>:capacity = <value>

Where:

  • <process> is the name of the process that is being connected.
  • <up_down> is the direction of the connection. This is either “up” or “down”.
  • <port> is the name of the port.

For example, the following connection:

connect from input.timestamp
        to   stabilize.timestamp

can be described as follows:

config _pipeline:_edge_by_conn
   input:up:timestamp:capacity = 20
   stabilize:down:timestamp:capacity = 20

Both of these entries refer to the same edge, so in real life, you would only need one.

These different methods of configuring pipeline edges are applied in a hierarchical manner, allowing general defaults to be set and then overridden by more specific edge attributes. The order of application is: default capacity, then edge by type, then edge by connection.
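
Putting the three levels together, a pipeline might configure its edges as shown below; the capacity values are only illustrative:

config _pipeline:_edge
       capacity = 30                            # default for every edge

config _pipeline:_edge_by_type
       image_container:capacity = 10            # applies to all image_container edges

config _pipeline:_edge_by_conn
       stabilize:down:timestamp:capacity = 4    # applies to one specific connection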

Scheduler configuration

Normally the pipeline is run with a default scheduler that assigns one thread to each process. A different scheduler can be specified in the config file. Configuration parameters for the scheduler can be specified in this section also.:

config _scheduler
   type = <scheduler-type>

Available scheduler types are:

  • sync - Runs the pipeline synchronously in one thread.
  • thread_per_process - Runs the pipeline using one thread per process.
  • pythread_per_process - Runs the pipeline using one thread per process and supports processes written in python.
  • thread_pool - Runs pipeline with a limited number of threads (not implemented).

The pythread_per_process scheduler is the only one that supports processes written in Python.

Scheduler specific configuration entries are in a sub-block named as the scheduler. Currently these schedulers do not have any configuration parameters, but when they do, they would be configured as shown in the following example.

Example

The pipeline scheduler can be selected in the pipeline configuration as follows:

config _scheduler
 type = thread_per_process

 # Configuration for thread_per_process scheduler
 thread_per_process:foo = bar

 # Configuration for sync scheduler
 sync:foos = bars
Clusters Definition File

A cluster is a collection of processes which can be treated as a single process for connection and configuration purposes. Clusters are defined in a single file, with one cluster per file.

A cluster definition starts with the cluster keyword followed by the name of the cluster. A documentation section must follow the cluster name definition; this is where you describe the purpose and function of the cluster, in addition to any other important information about limitations or assumptions. These documentation comments start with -- and continue to the end of the line.

The body of the cluster definition is made up of three types of declarations that may appear multiple times and in any order. These are:

  • config specifier
  • input mapping
  • output mapping

A description is required after each one of these entries. The description starts with “--” and continues to the end of the line. These descriptions are different from typical comments you would put in a pipe file in that they are associated with the cluster elements and serve as user documentation for the cluster.

After the cluster has been defined, the constituent processes are defined. These processes are contained within the cluster and can be interconnected in any valid configuration.

config specifier

A configuration specification defines a configuration key with a value that is bound to the cluster. These configuration items are available for use within the cluster definition file and are referenced as <cluster-name>:<config-key>:

cluster_key = value
-- Describe configuration entry
Input mapping

The input mapping specification creates an input port on the cluster and defines how it is connected to a process (or processes) within the cluster. When a cluster is instantiated in a pipeline, connections can be made to these ports.:

imap from cport
     to   proc1.port
     to   proc2.port
-- Describe input port expected data type and
-- all other interesting details.
Output mapping

The output mapping specification creates an output port on the cluster and defines how the data is supplied. When a cluster is instantiated, these output ports can be connected to downstream processes in the usual manner:

omap from proc2.oport   to  cport
-- Describe output port data type and
-- all other interesting details.

An example cluster definition is as follows:

cluster <name>
  -- Description of cluster.
  -- May extend to multiple lines.

  cluster_key = value
  -- Describe the config entry here.

  imap from cport
       to   proc1.port
       to   proc2.port
  -- Describe input port. Input port can be mapped
  -- to multiple process ports

  omap from proc2.oport    to  coport
  -- describe output port

The following is a more complicated example:

cluster configuration_provide
  -- Multiply a number by a constant factor.

  factor = 20
  -- The constant factor to multiply by.

  imap from factor  to   multiply.factor1
  -- The factor to multiply by.

  omap from multiply.product    to   product
  -- The product.

 # The following defines the contained processes
process const
  :: const_number
  value[ro]= $CONFIG{configuration_provide:factor}

process multiply
  :: multiplication

connect from const.number        to   multiply.factor2

Process

any_source
Configuration
Input Ports

There are no input ports for this process.

Output Ports
Port name   Data Type   Flags       Description
data        _any        _required   The data.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: any_source
# ================================================================
Process connections
The following Input ports will need to be set
# There are no input ports for this process
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.data
         to   <downstream-proc>.data
Class Description


collate
Configuration
Input Ports

There are no input ports for this process.

Output Ports
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: collate
# ================================================================
Process connections
The following Input ports will need to be set
# There are no input ports for this process
The following Output ports will need to be set
# This process will produce the following output ports
Class Description


compute_homography
Configuration
Input Ports
Port name Data Type Flags Description
feature_track_set kwiver:feature_track_set _required Set of feature tracks.
timestamp kwiver:timestamp _required Timestamp for input image.
Output Ports
Port name Data Type Flags Description
homography_src_to_ref kwiver:s2r_homography (none) Source image to ref image homography.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: compute_homography
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.feature_track_set
         to   <upstream-proc>.feature_track_set
connect from <this-proc>.timestamp
         to   <upstream-proc>.timestamp
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.homography_src_to_ref
         to   <downstream-proc>.homography_src_to_ref
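
As a sketch of typical usage, compute_homography sits downstream of a feature tracker and upstream of a consumer of the source-to-reference homography, such as the kw_archive_writer process documented later in this section. The instance names input, tracker, and archive are illustrative and assume a frame source and a feature-tracking chain like the one sketched under feature_matcher.

process homog
  :: compute_homography

connect from tracker.feature_track_set     to   homog.feature_track_set
connect from input.timestamp               to   homog.timestamp

connect from homog.homography_src_to_ref   to   archive.homography_src_to_ref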
Class Description


compute_stereo_depth_map
Configuration
Variable Default Tunable Description
computer (no default value) NO Algorithm configuration subblock
Input Ports
Port name Data Type Flags Description
left_image kwiver:image _required Single frame left image.
right_image kwiver:image _required Single frame right image.
Output Ports
Port name Data Type Flags Description
depth_map kwiver:image (none) Depth map stored in image form.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: compute_stereo_depth_map
# Algorithm configuration subblock
  computer = <value>
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.left_image
         to   <upstream-proc>.left_image
connect from <this-proc>.right_image
         to   <upstream-proc>.right_image
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.depth_map
         to   <downstream-proc>.depth_map
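
For example, the split_image process documented later in this section can supply the left/right pair. This is only a sketch: the instance names are arbitrary, and computer:type is a placeholder (following the block:type convention noted under detected_object_output) that must name a stereo depth arrow actually built in your configuration.

process splitter
  :: split_image

process depth
  :: compute_stereo_depth_map
# Placeholder algorithm selection.
  computer:type = <stereo-depth-algorithm>

connect from splitter.left_image    to   depth.left_image
connect from splitter.right_image   to   depth.right_image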
Class Description


const
Configuration
Input Ports

There are no input ports for this process.

Output Ports
Port name Data Type Flags Description
const _none _const, _required The port with the const flag set.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: const
# ================================================================
Process connections
The following Input ports will need to be set
# There are no input ports for this process
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.const
         to   <downstream-proc>.const
Class Description


const_number
Configuration
Variable Default Tunable Description
value 0 NO The constant value to output.
Input Ports

There are no input ports for this process.

Output Ports
Port name Data Type Flags Description
number integer _required Where the numbers will be available.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: const_number
# The constant value to output.
  value = 0
# ================================================================
Process connections
The following Input ports will need to be set
# There are no input ports for this process
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.number
         to   <downstream-proc>.number
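
A minimal sketch wiring two const_number instances into the multiplication process documented below; the instance names and values are arbitrary.

process six
  :: const_number
  value = 6

process seven
  :: const_number
  value = 7

process mult
  :: multiplication

connect from six.number     to   mult.factor1
connect from seven.number   to   mult.factor2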
Class Description


data_dependent
Configuration
Variable Default Tunable Description
reject false NO Whether to reject type setting requests or not.
set_on_configure true NO Whether to set the type on configure or not.
Input Ports

There are no input ports for this process.

Output Ports
Port name Data Type Flags Description
output _data_dependent (none) An output port with a data dependent type
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: data_dependent
# Whether to reject type setting requests or not.
  reject = false
# Whether to set the type on configure or not.
  set_on_configure = true
# ================================================================
Process connections
The following Input ports will need to be set
# There are no input ports for this process
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.output
         to   <downstream-proc>.output
Class Description


detect_features
Configuration
Variable Default Tunable Description
feature_detector (no default value) NO Algorithm configuration subblock.
Input Ports
Port name Data Type Flags Description
image kwiver:image _required Single frame image.
timestamp kwiver:timestamp _required Timestamp for input image.
Output Ports
Port name Data Type Flags Description
feature_set kwiver:feature_set (none) Set of detected image features.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: detect_features
# Algorithm configuration subblock.
  feature_detector = <value>
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.image
         to   <upstream-proc>.image
connect from <this-proc>.timestamp
         to   <upstream-proc>.timestamp
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.feature_set
         to   <downstream-proc>.feature_set
Class Description


detected_object_filter
Configuration
Variable Default Tunable Description
filter (no default value) NO Algorithm configuration subblock.
Input Ports
Port name Data Type Flags Description
detected_object_set kwiver:detected_object_set _required Set of detected objects.
Output Ports
Port name Data Type Flags Description
detected_object_set kwiver:detected_object_set (none) Set of detected objects.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: detected_object_filter
# Algorithm configuration subblock.
  filter = <value>
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.detected_object_set
         to   <upstream-proc>.detected_object_set
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.detected_object_set
         to   <downstream-proc>.detected_object_set
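
A typical placement is between a detector and a writer, as sketched below. The instance names are arbitrary, and the :type values are placeholders (following the block:type convention noted under detected_object_output) for whichever detector, filter, and writer arrows are built.

process detector
  :: image_object_detector
  detector:type = <detector-algorithm>

process filter
  :: detected_object_filter
  filter:type = <filter-algorithm>

process writer
  :: detected_object_output
  file_name = filtered_detections.csv
  writer:type = <writer-algorithm>

connect from detector.detected_object_set   to   filter.detected_object_set
connect from filter.detected_object_set     to   writer.detected_object_set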
Class Description


detected_object_input
Configuration
Variable Default Tunable Description
file_name (no default value) NO Name of the detection set file to read.
reader (no default value) NO Algorithm type to use as the reader.
Input Ports

There are no input ports for this process.

Output Ports
Port name Data Type Flags Description
detected_object_set kwiver:detected_object_set (none) Set of detected objects.
image_file_name kwiver:image_file_name (none) Name of an image file. The file name may contain leading path components.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: detected_object_input
# Name of the detection set file to read.
  file_name = <value>
# Algorithm type to use as the reader.
  reader = <value>
# ================================================================
Process connections
The following Input ports will need to be set
# There are no input ports for this process
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.detected_object_set
         to   <downstream-proc>.detected_object_set
connect from <this-proc>.image_file_name
         to   <downstream-proc>.image_file_name
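
Paired with detected_object_output, this process can form a simple format-conversion pipeline. The sketch below uses placeholder file names and reader/writer types; which formats are available depends on the arrows that were built.

process reader
  :: detected_object_input
  file_name = detections_in.dat
  reader:type = <input-format>

process writer
  :: detected_object_output
  file_name = detections_out.dat
  writer:type = <output-format>

connect from reader.detected_object_set   to   writer.detected_object_set
connect from reader.image_file_name       to   writer.image_file_name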
Class Description


detected_object_output
Configuration
Variable Default Tunable Description
file_name (no default value) NO Name of the detection set file to write.
writer (no default value) NO Block name for algorithm parameters. e.g. writer:type would be used to specify the algorithm type.
Input Ports
Port name Data Type Flags Description
detected_object_set kwiver:detected_object_set _required Set of detected objects.
image_file_name kwiver:image_file_name (none) Name of an image file. The file name may contain leading path components.
Output Ports
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: detected_object_output
# Name of the detection set file to write.
  file_name = <value>
# Block name for algorithm parameters. e.g. writer:type would be used to
# specify the algorithm type.
  writer = <value>
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.detected_object_set
         to   <upstream-proc>.detected_object_set
connect from <this-proc>.image_file_name
         to   <upstream-proc>.image_file_name
The following Output ports will need to be set
# This process will produce the following output ports
Class Description


distribute
Configuration
Input Ports

There are no input ports for this process.

Output Ports
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: distribute
# ================================================================
Process connections
The following Input ports will need to be set
# There are no input ports for this process
The following Output ports will need to be set
# This process will produce the following output ports
Class Description


draw_detected_object_boxes
Configuration
Variable Default Tunable Description
alpha_blend_prob true YES If true, detections with lower confidence are drawn more transparently.
clip_box_to_image false YES If this option is set to true, the bounding box is clipped to the image bounds.
custom_class_color (no default value) YES List of class/thickness/color entries separated by semicolons. For example: person/3/255 0 0;car/2/0 255 0. Color is in RGB.
default_color 0 0 255 YES The default color for a class (RGB).
default_line_thickness 1 YES The default line thickness for a class, in pixels.
draw_text true YES If this option is set to true, the class name is drawn next to the detection.
select_classes ALL YES List of classes to display, separated by a semicolon. For example: person;car;clam
text_scale 0.4 YES Scaling for the text label.
text_thickness 1.0 YES Thickness for the text.
threshold -1 YES Minimum confidence threshold for output (float). Detections with confidence values below this value are not drawn.
Input Ports
Port name Data Type Flags Description
detected_object_set kwiver:detected_object_set _required Set of detected objects.
image kwiver:image _required Single frame image.
Output Ports
Port name Data Type Flags Description
image kwiver:image (none) Single frame image.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: draw_detected_object_boxes
# If true, detections with lower confidence are drawn more transparently.
  alpha_blend_prob = true
# If this option is set to true, the bounding box is clipped to the image
# bounds.
  clip_box_to_image = false
# List of class/thickness/color entries separated by semicolons. For example:
# person/3/255 0 0;car/2/0 255 0. Color is in RGB.
  custom_class_color = <value>
# The default color for a class (RGB).
  default_color = 0 0 255
# The default line thickness for a class, in pixels.
  default_line_thickness = 1
# If this option is set to true, the class name is drawn next to the detection.
  draw_text = true
# List of classes to display, separated by a semicolon. For example:
# person;car;clam
  select_classes = *ALL*
# Scaling for the text label.
  text_scale = 0.4
# Thickness for text
  text_thickness = 1.0
# min threshold for output (float). Detections with confidence values below
# this value are not drawn.
  threshold = -1
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.detected_object_set
         to   <upstream-proc>.detected_object_set
connect from <this-proc>.image
         to   <upstream-proc>.image
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.image
         to   <downstream-proc>.image
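
A common arrangement draws detections onto the source imagery and displays the result with image_viewer. In the sketch below, input and detector stand for an upstream image source (for example frame_list_input) and an image_object_detector instance; all instance names are illustrative.

process drawer
  :: draw_detected_object_boxes
  default_line_thickness = 2

process viewer
  :: image_viewer
  pause_time = 0.1

connect from input.image                    to   drawer.image
connect from detector.detected_object_set   to   drawer.detected_object_set

connect from drawer.image                   to   viewer.image
connect from input.timestamp                to   viewer.timestamp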
Class Description


draw_detected_object_set
Configuration
Variable Default Tunable Description
draw_algo (no default value) NO Name of drawing algorithm config block.
Input Ports
Port name Data Type Flags Description
detected_object_set kwiver:detected_object_set _required Set of detected objects.
image kwiver:image _required Single frame image.
Output Ports
Port name Data Type Flags Description
image kwiver:image (none) Single frame image.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: draw_detected_object_set
# Name of drawing algorithm config block.
  draw_algo = <value>
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.detected_object_set
         to   <upstream-proc>.detected_object_set
connect from <this-proc>.image
         to   <upstream-proc>.image
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.image
         to   <downstream-proc>.image
Class Description


draw_tracks
Configuration
Input Ports
Port name Data Type Flags Description
feature_track_set kwiver:feature_track_set _required Set of feature tracks.
image kwiver:image _required Single frame image.
Output Ports
Port name Data Type Flags Description
output_image kwiver:image (none) Image with tracks
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: draw_tracks
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.feature_track_set
         to   <upstream-proc>.feature_track_set
connect from <this-proc>.image
         to   <upstream-proc>.image
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.output_image
         to   <downstream-proc>.output_image
Class Description


duplicate
Configuration
Variable Default Tunable Description
copies 1 NO The number of copies to make of each input.
Input Ports
Port name Data Type Flags Description
input _flow_dependent/tag _required Arbitrary input data.
Output Ports
Port name Data Type Flags Description
duplicate _flow_dependent/tag _required, _shared Duplicated input data.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: duplicate
# The number of copies to make of each input.
  copies = 1
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.input
         to   <upstream-proc>.input
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.duplicate
         to   <downstream-proc>.duplicate
Class Description


expect
Configuration
Variable Default Tunable Description
expect (no default value) NO The expected value.
expect_key false NO Whether to expect a key or a value.
tunable (no default value) YES A tunable value.
Input Ports

There are no input ports for this process.

Output Ports
Port name Data Type Flags Description
dummy _none (none) A dummy port.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: expect
# The expected value.
  expect = <value>
# Whether to expect a key or a value.
  expect_key = false
# A tunable value.
  tunable = <value>
# ================================================================
Process connections
The following Input ports will need to be set
# There are no input ports for this process
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.dummy
         to   <downstream-proc>.dummy
Class Description


extract_descriptors
Configuration
Input Ports
Port name Data Type Flags Description
feature_set kwiver:feature_set _required Set of detected image features.
image kwiver:image _required Single frame image.
timestamp kwiver:timestamp _required Timestamp for input image.
Output Ports
Port name Data Type Flags Description
descriptor_set kwiver:descriptor_set (none) Set of descriptors.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: extract_descriptors
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.feature_set
         to   <upstream-proc>.feature_set
connect from <this-proc>.image
         to   <upstream-proc>.image
connect from <this-proc>.timestamp
         to   <upstream-proc>.timestamp
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.descriptor_set
         to   <downstream-proc>.descriptor_set
Class Description


feature_matcher
Configuration
Input Ports
Port name Data Type Flags Description
descriptor_set kwiver:descriptor_set _required Set of descriptors.
feature_set kwiver:feature_set _required Set of detected image features.
image kwiver:image _required Single frame image.
timestamp kwiver:timestamp _required Timestamp for input image.
Output Ports
Port name Data Type Flags Description
feature_track_set kwiver:feature_track_set (none) Set of feature tracks.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: feature_matcher
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.descriptor_set
         to   <upstream-proc>.descriptor_set
connect from <this-proc>.feature_set
         to   <upstream-proc>.feature_set
connect from <this-proc>.image
         to   <upstream-proc>.image
connect from <this-proc>.timestamp
         to   <upstream-proc>.timestamp
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.feature_track_set
         to   <downstream-proc>.feature_track_set
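
The ports above suggest the usual feature-tracking front end: a frame source feeding detect_features, extract_descriptors, and feature_matcher. The following sketch wires those processes together; the instance names, image list file, and algorithm types are placeholders for whatever arrows are built.

process input
  :: frame_list_input
  image_list_file = images.txt
  image_reader:type = <reader-algorithm>

process detector
  :: detect_features
  feature_detector:type = <feature-detector-algorithm>

process extractor
  :: extract_descriptors

process tracker
  :: feature_matcher

connect from input.image              to   detector.image
connect from input.timestamp          to   detector.timestamp

connect from input.image              to   extractor.image
connect from input.timestamp          to   extractor.timestamp
connect from detector.feature_set     to   extractor.feature_set

connect from input.image              to   tracker.image
connect from input.timestamp          to   tracker.timestamp
connect from detector.feature_set     to   tracker.feature_set
connect from extractor.descriptor_set to   tracker.descriptor_set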
Class Description


feedback
Configuration
Input Ports
Port name Data Type Flags Description
input __feedback _nodep, _required A port which accepts this process’ output.
Output Ports
Port name Data Type Flags Description
output __feedback _required A port which outputs data for this process’ input.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: feedback
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.input
         to   <upstream-proc>.input
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.output
         to   <downstream-proc>.output
Class Description


flow_dependent
Configuration
Variable Default Tunable Description
reject false NO Whether to reject type setting requests or not.
Input Ports
Port name Data Type Flags Description
input _flow_dependent/tag (none) An input port with a flow dependent type.
Output Ports
Port name Data Type Flags Description
output _flow_dependent/tag (none) An output port with a flow dependent type
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: flow_dependent
# Whether to reject type setting requests or not.
  reject = false
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.input
         to   <upstream-proc>.input
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.output
         to   <downstream-proc>.output
Class Description


frame_list_input
This process uses the following vital algorithms:
Configuration
Variable Default Tunable Description
frame_time 0.03333333 NO Inter-frame time in seconds. Generated timestamps for sequential frames will be spaced by this many seconds, which can be used to simulate a frame rate in a video stream application.
image_list_file (no default value) NO Name of the file that contains the list of image file names. Each line in the file specifies the name of a single image file.
image_reader (no default value) NO Algorithm configuration subblock.
path (no default value) NO Path to search for the image file. The format is the same as the standard path specification, a set of directories separated by a colon (':').
Input Ports

There are no input ports for this process.

Output Ports
Port name Data Type Flags Description
image kwiver:image (none) Single frame image.
image_file_name kwiver:image_file_name (none) Name of an image file. The file name may contain leading path components.
timestamp kwiver:timestamp (none) Timestamp for input image.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: frame_list_input
# Inter-frame time in seconds. Generated timestamps for sequential frames will
# be spaced by this many seconds, which can be used to simulate a frame rate in
# a video stream application.
  frame_time = 0.03333333
# Name of file that contains list of image file names. Each line in the file
# specifies the name of a single image file.
  image_list_file = <value>
# Algorithm configuration subblock
  image_reader = <value>
# Path to search for image file. The format is the same as the standard path
# specification, a set of directories separated by a colon (':')
  path = <value>
# ================================================================
Process connections
The following Input ports will need to be set
# There are no input ports for this process
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.image
         to   <downstream-proc>.image
connect from <this-proc>.image_file_name
         to   <downstream-proc>.image_file_name
connect from <this-proc>.timestamp
         to   <downstream-proc>.timestamp
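
A minimal sketch pairing this process with image_viewer to step through a list of images; the image list file name and reader type are placeholders.

process input
  :: frame_list_input
  image_list_file = images.txt
  image_reader:type = <reader-algorithm>

process viewer
  :: image_viewer
  annotate_image = true
  pause_time = 0.05

connect from input.image       to   viewer.image
connect from input.timestamp   to   viewer.timestamp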
Class Description


image_file_reader
Configuration
Variable Default Tunable Description
error_mode fail NO How to handle file-not-found errors. Options are 'fail' and 'pass'. Specifying 'fail' will cause an exception to be thrown; the 'pass' option will only log a warning and wait for the next file name.
image_reader (no default value) NO Algorithm configuration subblock.
path (no default value) NO Path to search for the image file. The format is the same as the standard path specification, a set of directories separated by a colon (':').
Input Ports
Port name Data Type Flags Description
image_file_name kwiver:image_file_name _required Name of an image file. The file name may contain leading path components.
Output Ports
Port name Data Type Flags Description
image kwiver:image (none) Single frame image.
image_file_name kwiver:image_file_name (none) Name of an image file. The file name may contain leading path components.
timestamp kwiver:timestamp (none) Timestamp for input image.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: image_file_reader
# How to handle file-not-found errors. Options are 'fail' and 'pass'.
# Specifying 'fail' will cause an exception to be thrown; the 'pass' option
# will only log a warning and wait for the next file name.
  error_mode = fail
# Algorithm configuration subblock.
  image_reader = <value>
# Path to search for image file. The format is the same as the standard path
# specification, a set of directories separated by a colon (':')
  path = <value>
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.image_file_name
         to   <upstream-proc>.image_file_name
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.image
         to   <downstream-proc>.image
connect from <this-proc>.image_file_name
         to   <downstream-proc>.image_file_name
connect from <this-proc>.timestamp
         to   <downstream-proc>.timestamp
Class Description


image_filter
Configuration
Variable Default Tunable Description
filter (no default value) NO Algorithm configuration subblock
Input Ports
Port name Data Type Flags Description
image kwiver:image _required Single frame image.
Output Ports
Port name Data Type Flags Description
image kwiver:image (none) Single frame image.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: image_filter
# Algorithm configuration subblock
  filter = <value>
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.image
         to   <upstream-proc>.image
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.image
         to   <downstream-proc>.image
Class Description


image_object_detector
This process uses the following vital algorithms:
Configuration
Variable Default Tunable Description
detector (no default value) NO Algorithm configuration subblock
Input Ports
Port name Data Type Flags Description
image kwiver:image _required Single frame image.
Output Ports
Port name Data Type Flags Description
detected_object_set kwiver:detected_object_set (none) Set of detected objects.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: image_object_detector
# Algorithm configuration subblock
  detector = <value>
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.image
         to   <upstream-proc>.image
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.detected_object_set
         to   <downstream-proc>.detected_object_set
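
A simple detection pipeline can be sketched by combining this process with frame_list_input and detected_object_output; the file names and algorithm types below are placeholders for whichever arrows are built.

process input
  :: frame_list_input
  image_list_file = images.txt
  image_reader:type = <reader-algorithm>

process detector
  :: image_object_detector
  detector:type = <detector-algorithm>

process writer
  :: detected_object_output
  file_name = detections.csv
  writer:type = <writer-algorithm>

connect from input.image                    to   detector.image
connect from detector.detected_object_set   to   writer.detected_object_set
connect from input.image_file_name          to   writer.image_file_name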
Class Description


image_viewer
Configuration
Variable Default Tunable Description
annotate_image false NO Add frame number and other text to the display.
footer (no default value) NO Footer text for image display. Displayed centered at bottom of image.
header (no default value) NO Header text for image display.
pause_time 0 NO Interval to pause between frames. 0 means wait for a keystroke; otherwise the interval is in seconds (float).
title Display window NO Display window title text.
Input Ports
Port name Data Type Flags Description
image kwiver:image _required Single frame image.
timestamp kwiver:timestamp (none) Timestamp for input image.
Output Ports
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: image_viewer
# Add frame number and other text to display.
  annotate_image = false
# Footer text for image display. Displayed centered at bottom of image.
  footer = <value>
# Header text for image display.
  header = <value>
# Interval to pause between frames. 0 means wait for a keystroke; otherwise
# the interval is in seconds (float).
  pause_time = 0
# Display window title text.
  title = Display window
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.image
         to   <upstream-proc>.image
connect from <this-proc>.timestamp
         to   <upstream-proc>.timestamp
The following Output ports will need to be set
# This process will produce the following output ports
Class Description


image_writer
Configuration
Input Ports
Port name Data Type Flags Description
image kwiver:image _required Single frame image.
timestamp kwiver:timestamp (none) Timestamp for input image.
Output Ports
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: image_writer
# Template for generating output file names. The template is interpreted as a
# printf format with one format specifier to convert an integer increasing
# image number. The image file type is determined by the file extension and the
# concrete writer selected.
  file_name_template = image%04d.png
# Config block name to configure algorithm. The algorithm type is selected with
# "image_writer:type". Specific writer parameters depend on writer type
# selected.
  image_writer = <value>
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.image
         to   <upstream-proc>.image
connect from <this-proc>.timestamp
         to   <upstream-proc>.timestamp
The following Output ports will need to be set
# This process will produce the following output ports
Class Description


kw_archive_writer
Configuration
Variable Default Tunable Description
base_filename (no default value) NO Base file name (no extension) for KWA component files
compress_image true NO Whether to compress image data stored in archive
mission_id (no default value) NO Mission ID to store in archive
output_directory . NO Output directory where KWA will be written
separate_meta true NO Whether to write separate .meta file
static/corner_points (no default value) NO A default value to use for the ‘corner_points’ port if it is not connected.
static/gsd (no default value) NO A default value to use for the ‘gsd’ port if it is not connected.
stream_id (no default value) NO Stream ID to store in archive
Input Ports
Port name Data Type Flags Description
corner_points corner_points _static Four corner points for image in lat/lon units, ordering ul ur lr ll.
gsd kwiver:gsd _static GSD for image in meters per pixel.
homography_src_to_ref kwiver:s2r_homography _required Source image to ref image homography.
image kwiver:image _required Single frame image.
timestamp kwiver:timestamp _required Timestamp for input image.
Output Ports
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: kw_archive_writer
# Base file name (no extension) for KWA component files
  base_filename = <value>
# Whether to compress image data stored in archive
  compress_image = true
# Mission ID to store in archive
  mission_id = <value>
# Output directory where KWA will be written
  output_directory = .
# Whether to write separate .meta file
  separate_meta = true
# A default value to use for the 'corner_points' port if it is not connected.
  static/corner_points = <value>
# A default value to use for the 'gsd' port if it is not connected.
  static/gsd = <value>
# Stream ID to store in archive
  stream_id = <value>
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.corner_points
         to   <upstream-proc>.corner_points
connect from <this-proc>.gsd
         to   <upstream-proc>.gsd
connect from <this-proc>.homography_src_to_ref
         to   <upstream-proc>.homography_src_to_ref
connect from <this-proc>.image
         to   <upstream-proc>.image
connect from <this-proc>.timestamp
         to   <upstream-proc>.timestamp
The following Output ports will need to be set
# This process will produce the following output ports
Class Description


multiplication
Configuration
Input Ports
Port name Data Type Flags Description
factor1 integer _required The first factor to multiply.
factor2 integer _required The second factor to multiply.
Output Ports
Port name Data Type Flags Description
product integer _required Where the product will be available.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: multiplication
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.factor1
         to   <upstream-proc>.factor1
connect from <this-proc>.factor2
         to   <upstream-proc>.factor2
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.product
         to   <downstream-proc>.product
Class Description


multiplier_cluster
Configuration
Variable Default Tunable Description
factor (no default value) NO The constant factor to multiply by.
Input Ports
Port name Data Type Flags Description
factor integer _required The factor to multiply.
Output Ports
Port name Data Type Flags Description
product integer _required Where the product will be available.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: multiplier_cluster
# The constant factor to multiply by.
  factor = <value>
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.factor
         to   <upstream-proc>.factor
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.product
         to   <downstream-proc>.product
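
A minimal sketch driving the cluster with the numbers process and consuming its product with take_number; instance names and values are arbitrary.

process nums
  :: numbers
  start = 1
  end   = 5

process mult
  :: multiplier_cluster
  factor = 10

process consumer
  :: take_number

connect from nums.number    to   mult.factor
connect from mult.product   to   consumer.number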
Class Description


mutate
Configuration
Input Ports
Port name Data Type Flags Description
mutate _any _mutable, _required The port with the mutate flag set.
Output Ports
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: mutate
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.mutate
         to   <upstream-proc>.mutate
The following Output ports will need to be set
# This process will produce the following output ports
Class Description


numbers
Configuration
Variable Default Tunable Description
end 100 NO The value to stop counting at.
start 0 NO The value to start counting at.
Input Ports

There are no input ports for this process.

Output Ports
Port name Data Type Flags Description
number integer _required Where the numbers will be available.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: numbers
# The value to stop counting at.
  end = 100
# The value to start counting at.
  start = 0
# ================================================================
Process connections
The following Input ports will need to be set
# There are no input ports for this process
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.number
         to   <downstream-proc>.number
Class Description


orphan
Configuration
Input Ports

There are no input ports for this process.

Output Ports
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: orphan
# ================================================================
Process connections
The following Input ports will need to be set
# There are no input ports for this process
The following Output ports will need to be set
# This process will produce the following output ports
Class Description


orphan_cluster
Configuration
Input Ports

There are no input ports for this process.

Output Ports
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: orphan_cluster
# ================================================================
Process connections
The following Input ports will need to be set
# There are no input ports for this process
The following Output ports will need to be set
# This process will produce the following output ports
Class Description


output_adapter
Configuration
Variable Default Tunable Description
wait_on_queue_full TRUE NO When the output queue back to the application is full and there is more data to add, should new data be dropped or should the pipeline block until the data can be delivered? The default action is to wait until the data can be delivered.
Input Ports

There are no input ports for this process.

Output Ports
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: output_adapter
# When the output queue back to the application is full and there is more data
# to add, should new data be dropped or should the pipeline block until the
# data can be delivered. The default action is to wait until the data can be
# delivered.
  wait_on_queue_full = TRUE
# ================================================================
Process connections
The following Input ports will need to be set
# There are no input ports for this process
The following Output ports will need to be set
# This process will produce the following output ports
Class Description


pass
Configuration
Input Ports
Port name Data Type Flags Description
pass _flow_dependent/pass _required The datum to pass.
Output Ports
Port name Data Type Flags Description
pass _flow_dependent/pass _required The passed datum.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: pass
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.pass
         to   <upstream-proc>.pass
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.pass
         to   <downstream-proc>.pass
Class Description


read_d_vector
Configuration
Input Ports
Port name Data Type Flags Description
d_vector kwiver:d_vector _required Vector of doubles from descriptor
Output Ports
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: read_d_vector
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.d_vector
         to   <upstream-proc>.d_vector
The following Output ports will need to be set
# This process will produce the following output ports
Class Description


refine_detections
Configuration
Variable Default Tunable Description
refiner (no default value) NO Algorithm configuration subblock
Input Ports
Port name Data Type Flags Description
detected_object_set kwiver:detected_object_set _required Set of detected objects.
image kwiver:image (none) Single frame image.
Output Ports
Port name Data Type Flags Description
detected_object_set kwiver:detected_object_set (none) Set of detected objects.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: refine_detections
# Algorithm configuration subblock
  refiner = <value>
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.detected_object_set
         to   <upstream-proc>.detected_object_set
connect from <this-proc>.image
         to   <upstream-proc>.image
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.detected_object_set
         to   <downstream-proc>.detected_object_set
Class Description


shared
Configuration
Input Ports

There are no input ports for this process.

Output Ports
Port name Data Type Flags Description
shared _none _required, _shared The port with the shared flag set.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: shared
# ================================================================
Process connections
The following Input ports will need to be set
# There are no input ports for this process
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.shared
         to   <downstream-proc>.shared
Class Description


sink
Configuration
Input Ports
Port name Data Type Flags Description
sink _any _required The data to ignore.
Output Ports
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: sink
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.sink
         to   <upstream-proc>.sink
The following Output ports will need to be set
# This process will produce the following output ports
Class Description


split_image
Configuration
Input Ports
Port name Data Type Flags Description
image kwiver:image _required Single frame image.
Output Ports
Port name Data Type Flags Description
left_image kwiver:image (none) Single frame left image.
right_image kwiver:image (none) Single frame right image.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: split_image
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.image
         to   <upstream-proc>.image
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.left_image
         to   <downstream-proc>.left_image
connect from <this-proc>.right_image
         to   <downstream-proc>.right_image
Class Description


stabilize_image
Configuration
Input Ports
Port name Data Type Flags Description
image kwiver:image _required Single frame image.
timestamp kwiver:timestamp _required Timestamp for input image.
Output Ports
Port name Data Type Flags Description
homography_src_to_ref kwiver:s2r_homography (none) Source image to ref image homography.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: stabilize_image
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.image
         to   <upstream-proc>.image
connect from <this-proc>.timestamp
         to   <upstream-proc>.timestamp
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.homography_src_to_ref
         to   <downstream-proc>.homography_src_to_ref
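
A sketch of a stabilization-and-archive pipeline, combining a frame source, stabilize_image, and the kw_archive_writer process documented earlier; the instance names, file names, and reader type are placeholders.

process input
  :: frame_list_input
  image_list_file = frames.txt
  image_reader:type = <reader-algorithm>

process stabilizer
  :: stabilize_image

process archive
  :: kw_archive_writer
  base_filename = stabilized
  output_directory = ./kwa

connect from input.image       to   stabilizer.image
connect from input.timestamp   to   stabilizer.timestamp

connect from input.image                        to   archive.image
connect from input.timestamp                    to   archive.timestamp
connect from stabilizer.homography_src_to_ref   to   archive.homography_src_to_ref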
Class Description


tagged_flow_dependent
Configuration
Input Ports
Port name Data Type Flags Description
tagged_input _flow_dependent/tag (none) A tagged input port with a flow dependent type.
untagged_input _flow_dependent/ (none) An untagged input port with a flow dependent type.
Output Ports
Port name Data Type Flags Description
tagged_output _flow_dependent/tag (none) A tagged output port with a flow dependent type
untagged_output _flow_dependent/ (none) An untagged output port with a flow dependent type
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: tagged_flow_dependent
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.tagged_input
         to   <upstream-proc>.tagged_input
connect from <this-proc>.untagged_input
         to   <upstream-proc>.untagged_input
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.tagged_output
         to   <downstream-proc>.tagged_output
connect from <this-proc>.untagged_output
         to   <downstream-proc>.untagged_output
Class Description


take_number
Configuration
Input Ports
Port name Data Type Flags Description
number integer _required Where numbers are read from.
Output Ports
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: take_number
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.number
         to   <upstream-proc>.number
The following Output ports will need to be set
# This process will produce the following output ports

take_string
Configuration
Input Ports
Port name Data Type Flags Description
string string _required Where strings are read from.
Output Ports
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: take_string
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.string
         to   <upstream-proc>.string
The following Output ports will need to be set
# This process will produce the following output ports

template
Configuration
Variable Default Tunable Description
footer bottom NO Footer text for image display. Displayed centered at bottom of image.
gsd 3.14159 NO Meters per pixel scaling.
header top NO Header text for image display.
Input Ports
Port name Data Type Flags Description
image kwiver:image _required Single frame image.
timestamp kwiver:timestamp (none) Timestamp for input image.
Output Ports
Port name Data Type Flags Description
image kwiver:image (none) Single frame image.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: template
# Footer text for image display. Displayed centered at bottom of image.
  footer = bottom
# Meters per pixel scaling.
  gsd = 3.14159
# Header text for image display.
  header = top
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.image
         to   <upstream-proc>.image
connect from <this-proc>.timestamp
         to   <upstream-proc>.timestamp
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.image
         to   <downstream-proc>.image

track_descriptor_input
Configuration
Variable Default Tunable Description
file_name (no default value) NO Name of the track descriptor set file to read.
reader (no default value) NO Algorithm type to use as the reader.
Input Ports

There are no input ports for this process.

Output Ports
Port name Data Type Flags Description
image_file_name kwiver:image_file_name (none) Name of an image file. The file name may contain leading path components.
track_descriptor_set kwiver:track_descriptor_set (none) Set of track descriptors.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: track_descriptor_input
# Name of the track descriptor set file to read.
  file_name = <value>
# Algorithm type to use as the reader.
  reader = <value>
# ================================================================
Process connections
The following Input ports will need to be set
# There are no input ports for this process
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.image_file_name
         to   <downstream-proc>.image_file_name
connect from <this-proc>.track_descriptor_set
         to   <downstream-proc>.track_descriptor_set

track_descriptor_output
Configuration
Variable Default Tunable Description
file_name (no default value) NO Name of the track descriptor set file to write.
writer (no default value) NO Block name for algorithm parameters. e.g. writer:type would be used to specify the algorithm type.
Input Ports
Port name Data Type Flags Description
image_file_name kwiver:image_file_name (none) Name of an image file. The file name may contain leading path components.
track_descriptor_set kwiver:track_descriptor_set _required Set of track descriptors.
Output Ports
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: track_descriptor_output
# Name of the track descriptor set file to write.
  file_name = <value>
# Block name for algorithm parameters. e.g. writer:type would be used to
# specify the algorithm type.
  writer = <value>
# ================================================================
Process connections
The following Input ports will need to be set
# This process will consume the following input ports
connect from <this-proc>.image_file_name
         to   <upstream-proc>.image_file_name
connect from <this-proc>.track_descriptor_set
         to   <upstream-proc>.track_descriptor_set
The following Output ports will need to be set
# This process will produce the following output ports
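
Taken together, the reader and writer processes above can form a simple hypothetical copy pipeline that reads a track descriptor set file and writes it back out. The file names are placeholders, and the reader algorithm selection is written by analogy with the writer:type convention described above; verify the exact configuration keys for your build with plugin_explorer.

# ================================================================
process descriptor_reader
  :: track_descriptor_input
  file_name = <value>
  reader:type = <algo-name>

process descriptor_writer
  :: track_descriptor_output
  file_name = <value>
  writer:type = <algo-name>
# ================================================================

connect from descriptor_reader.track_descriptor_set
         to   descriptor_writer.track_descriptor_set
connect from descriptor_reader.image_file_name
         to   descriptor_writer.image_file_name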

tunable
Configuration
Variable Default Tunable Description
non_tunable (no default value) NO The non-tunable output.
tunable (no default value) YES The tunable output.
Input Ports

There are no input ports for this process.

Output Ports
Port name Data Type Flags Description
non_tunable string (none) The non-tunable output.
tunable string (none) The tunable output.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: tunable
# The non-tunable output.
  non_tunable = <value>
# The tunable output.
  tunable = <value>
# ================================================================
Process connections
The following Input ports will need to be set
# There are no input ports for this process
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.non_tunable
         to   <downstream-proc>.non_tunable
connect from <this-proc>.tunable
         to   <downstream-proc>.tunable

video_input
This process uses the following vital algorithms: video_input (see the video_reader configuration entry below).
Configuration
Variable Default Tunable Description
frame_time 0.03333333 NO Inter frame time in seconds. If the input video stream does not supply frame times, this value is used to create a default timestamp. If the video stream has frame times, then those are used.
video_filename (no default value) NO Name of video file.
video_reader (no default value) NO Name of video input algorithm. Name of the video reader algorithm plugin is specified as video_reader:type = <algo-name>
Input Ports

There are no input ports for this process.

Output Ports
Port name Data Type Flags Description
image kwiver:image (none) Single frame image.
timestamp kwiver:timestamp (none) Timestamp for input image.
video_metadata kwiver:video_metadata (none) Video metadata vector for a frame.
Pipefile Usage

The following sections describe the blocks needed to use this process in a pipe file.

Pipefile block
# ================================================================
process <this-proc>
  :: video_input
# Inter frame time in seconds. If the input video stream does not supply frame
# times, this value is used to create a default timestamp. If the video stream
# has frame times, then those are used.
  frame_time = 0.03333333
# Name of video file.
  video_filename = <value>
# Name of video input algorithm.  Name of the video reader algorithm plugin is
# specified as video_reader:type = <algo-name>
  video_reader = <value>
# ================================================================
Process connections
The following Input ports will need to be set
# There are no input ports for this process
The following Output ports will need to be set
# This process will produce the following output ports
connect from <this-proc>.image
         to   <downstream-proc>.image
connect from <this-proc>.timestamp
         to   <downstream-proc>.timestamp
connect from <this-proc>.video_metadata
         to   <downstream-proc>.video_metadata
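
Putting the block and connections together, a minimal hypothetical pipe that plays a video in a window might look like the following; the image_viewer process and the configuration values are placeholders, and the Simple Video tutorial later in this guide ships a complete, configured version of this pipeline.

# ================================================================
process video
  :: video_input
# Name of video file.
  video_filename = <value>
# Name of the video reader algorithm plugin.
  video_reader:type = <algo-name>

process disp
  :: image_viewer
# ================================================================

connect from video.image
         to   disp.image
connect from video.timestamp
         to   disp.timestamp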

Tools

KWIVER provides command line tools that help explore and leverage KWIVER and its capabilities without requiring any code to be written. plugin_explorer allows the exploration of KWIVER’s plugin space, including the available Arrows and Sprokit processes. pipeline_runner runs Sprokit pipelines and provides a way to dynamically configure them.

Plugin Explorer

Plugin explorer is the tool for exploring the available plugins. Since KWIVER relies heavily on dynamically loaded content through plugins, this tool is essential for determining what is available and for helping diagnose plugin problems.

The -h option is used to display the built-in help for the command line options. Before we delve into the full set of options, there are two common uses: locating processes and locating algorithms. This can be done with the --proc opt option for processes or the --algo opt option for algorithms. The opt argument can be all to list all plugins of that type. If the option is not all, then it is interpreted as a regular expression and all plugins of the selected type that match are listed.

Processes

For example, if you are looking for processes that provide input, you could enter the following query that looks for any process with ‘input’ in its type:

$ plugin_explorer --proc input

This generates the following output:

Plugins that implement type "sprokit::process"

---------------------
Process type: frame_list_input
Description:  Reads a list of image file names and generates stream of images and
              associated time stamps

---------------------
Process type: detected_object_input
Description:  Reads detected object sets from an input file. Detections read from the
              input file are grouped into sets for each image and individually returned.

---------------------
Process type: video_input
Description:  Reads video files and produces sequential images with metadata per frame.

---------------------
Process type: track_descriptor_input
Description:  Reads track descriptor sets from an input file.

---------------------
Process type: input_adapter
Description:  Source process for pipeline. Pushes data items into pipeline ports. Ports
              are dynamically created as needed based on connections specified in the
              pipeline file.

After you have determined which process meets your needs, more detailed information can be displayed by adding the -d option:

$ plugin_explorer --proc video_input -d

Plugins that implement type "sprokit::process"

---------------------
Process type: video_input
Description:       Reads video files and produces sequential images with metadata per frame.

  Properties: _no_reentrant

  -- Configuration --
  Name       : frame_time
  Default    : 0.03333333
  Description: Inter frame time in seconds. If the input video stream does not supply
               frame times, this value is used to create a default timestamp. If the
               video stream has frame times, then those are used.
  Tunable    : no

  Name       : video_filename
  Default    :
  Description: Name of video file.
  Tunable    : no

  Name       : video_reader
  Default    :
  Description: Name of video input algorithm.  Name of the video reader algorithm plugin
               is specified as video_reader:type = <algo-name>
  Tunable    : no

-- Input ports --
  No input ports

-- Output ports --
  Name       : image
  Data type  : kwiver:image
  Flags      :
  Description: Single frame image.

  Name       : timestamp
  Data type  : kwiver:timestamp
  Flags      :
  Description: Timestamp for input image.

  Name       : metadata
  Data type  : kwiver:metadata
  Flags      :
  Description: Video metadata vector for a frame.

The detailed information display shows the configuration parameters, input and output ports.

Algorithms

Algorithms can be queried in a similar manner. The algorithm query lists all implementations for the selected algorithm type. We can get a brief list of all algorithm type names that contain “input” by using the following command:

$ plugin_explorer --algo input -b

Plugins that implement type "detected_object_set_input"
    Algorithm type: detected_object_set_input   Implementation: kw18
    Algorithm type: detected_object_set_input   Implementation: csv

Plugins that implement type "video_input"
    Algorithm type: video_input   Implementation: filter
    Algorithm type: video_input   Implementation: image_list
    Algorithm type: video_input   Implementation: pos
    Algorithm type: video_input   Implementation: split
    Algorithm type: video_input   Implementation: vidl_ffmpeg

You can see that two algorithm types were found and their different implementations are listed. We can further examine what implementations are available for “video_input” with the following command:

$ plugin_explorer --algo video_input

The result is a brief listing of all algorithms that implement the “video_input” algorithm:

Plugins that implement type "video_input"

---------------------
Info on algorithm type "video_input" implementation "filter"
  Plugin name: filter      Version: 1.0
      A video input that calls another video input and filters the output on
      frame range and other parameters.

---------------------
Info on algorithm type "video_input" implementation "image_list"
  Plugin name: image_list      Version: 1.0
      Read a list of images from a list of file names and presents them in the
      same way as reading a video.  The actual algorithm to read an image is
      specified in the "image_reader" config block.  Read an image list as a
      video stream.

---------------------
Info on algorithm type "video_input" implementation "pos"
  Plugin name: pos      Version: 1.0
      Read video metadata in AFRL POS format. The algorithm takes configuration
      for a directory full of images and an associated directory name for the
      metadata files. These metadata files have the same base name as the image
      files. Each metadata file is associated with the image file.

---------------------
Info on algorithm type "video_input" implementation "split"
  Plugin name: split      Version: 1.0
      Coordinate two video readers. One reader supplies the image/data stream.
      The other reader supplies the metadata stream.

---------------------
Info on algorithm type "video_input" implementation "vidl_ffmpeg"
  Plugin name: vidl_ffmpeg      Version: 1.0
      Use VXL (vidl with FFMPEG) to read video files as a sequence of images.

A detailed description of an algorithm can be generated by adding the -d option to the command line. The detailed output for one of the algorithms is shown below:

---------------------
Info on algorithm type "video_input" implementation "vidl_ffmpeg"
Plugin name: vidl_ffmpeg      Version: 1.0
    Use VXL (vidl with FFMPEG) to read video files as a sequence of images.
  -- Configuration --
  "absolute_time_source" = "none"
  Description:       List of sources for absolute frame time information. This entry specifies
    a comma separated list of sources that are tried in order until a valid
    time source is found. If an absolute time source is found, it is used in
    the output time stamp. Absolute times are derived from the metadata in the
    video stream. Valid source names are "none", "misp", "klv0601", "klv0104".
    Where:
        none - do not supply absolute time
        misp - use frame embedded time stamps.
        klv0601 - use klv 0601 format metadata for frame time
        klv0104 - use klv 0104 format metadata for frame time
    Note that when "none" is found in the list no further time sources will be
    evaluated, the output timestamp will be marked as invalid, and the
    HAS_ABSOLUTE_FRAME_TIME capability will be set to false.  The same
    behavior occurs when all specified sources are tried and no valid time
    source is found.

  "start_at_frame" = "0"
  Description:       Frame number (from 1) to start processing video input. If set to zero,
    start at the beginning of the video.

  "stop_after_frame" = "0"
  Description:       Number of frames to supply. If set to zero then supply all frames after
    start frame.

  "time_scan_frame_limit" = "100"
  Description:       Number of frames to be scanned searching input video for embedded time. If
    the value is zero, the whole video will be scanned.
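
These algorithm-level settings are normally supplied through the process that hosts the algorithm. As a hedged sketch (the exact nesting of the implementation-specific keys is an assumption; confirm the key paths for your build with plugin_explorer --algo video_input -d), the video_input process documented earlier could select and configure this implementation roughly as follows:

process video
  :: video_input
  video_filename = <value>
# choose the vidl_ffmpeg implementation of the video reader algorithm
  video_reader:type = vidl_ffmpeg
# implementation-specific settings are then passed through the same block
# (hypothetical nesting shown)
  video_reader:vidl_ffmpeg:start_at_frame = 100
  video_reader:vidl_ffmpeg:stop_after_frame = 500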

Other Plugin Types

A summary of all the available plugin types can be displayed using the --summary command line option:

----Summary of plugin types
  38 types of plugins registered.
      1 plugin(s) that create "sprokit::process_instrumentation"
      53 plugin(s) that create "sprokit::process"
      3 plugin(s) that create "sprokit::scheduler"
      1 plugin(s) that create "analyze_tracks"
      3 plugin(s) that create "bundle_adjust"
      5 plugin(s) that create "close_loops"
      1 plugin(s) that create "compute_ref_homography"
      1 plugin(s) that create "convert_image"
      11 plugin(s) that create "detect_features"
      1 plugin(s) that create "detected_object_filter"
      2 plugin(s) that create "detected_object_set_input"
      2 plugin(s) that create "detected_object_set_output"
      1 plugin(s) that create "draw_detected_object_set"
      1 plugin(s) that create "draw_tracks"
      1 plugin(s) that create "dynamic_configuration"
      2 plugin(s) that create "estimate_canonical_transform"
      1 plugin(s) that create "estimate_essential_matrix"
      2 plugin(s) that create "estimate_fundamental_matrix"
      2 plugin(s) that create "estimate_homography"
      1 plugin(s) that create "estimate_similarity_transform"
      9 plugin(s) that create "extract_descriptors"
      1 plugin(s) that create "feature_descriptor_io"
      2 plugin(s) that create "filter_features"
      1 plugin(s) that create "filter_tracks"
      1 plugin(s) that create "formulate_query"
      2 plugin(s) that create "image_io"
      2 plugin(s) that create "image_object_detector"
      1 plugin(s) that create "initialize_cameras_landmarks"
      5 plugin(s) that create "match_features"
      2 plugin(s) that create "optimize_cameras"
      1 plugin(s) that create "refine_detections"
      2 plugin(s) that create "split_image"
      1 plugin(s) that create "track_descriptor_set_output"
      1 plugin(s) that create "track_features"
      1 plugin(s) that create "train_detector"
      2 plugin(s) that create "triangulate_landmarks"
      5 plugin(s) that create "video_input"
  137 total plugins

This summary output can be used to get an overview of what algorithm types are available.

A full list of the options

A full list of all program options can be displayed with the -h command line option:

$ plugin_explorer -h
Usage for plugin_explorer
  Version: 1.1

 --algo opt        Display only algorithm type plugins. If type is specified
                   as "all", then all algorithms are listed. Otherwise, the
                   type will be treated as a regexp and only algorithm types
                   that match the regexp will be displayed.

 --algorithm opt   Display only algorithm type plugins. If type is specified
                   as "all", then all algorithms are listed. Otherwise, the
                   type will be treated as a regexp and only algorithm types
                   that match the regexp will be displayed.

 --all             Display all plugin types

 --attrs           Display raw attributes for plugins without calling any
                   category specific formatting

 --brief           Generate brief display

 --detail          Display detailed information about plugins

 --fact opt        Only display factories whose interface type matches
                   specified regexp

 --factory opt     Only display factories whose interface type matches
                   specified regexp

 --files           Display list of loaded files

 --filter opts     Filter factories based on attribute name and value. Only
                   two fields must follow: <attr-name> <attr-value>

 --fmt opt         Generate display using alternative format, such as 'rst' or
                   'pipe'

 --help            Display usage information

 --hidden          Display hidden properties and ports

 --load opt        Load only specified plugin file for inspection. No other
                   plugins are loaded.

 --mod             Display list of loaded modules

 --path            Display plugin search path

 --proc opt        Display only sprokit process type plugins. If type is
                   specified as "all", then all processes are listed.
                   Otherwise, the type will be treated as a regexp and only
                   processes names that match the regexp will be displayed.

 --process opt     Display only sprokit process type plugins. If type is
                   specified as "all", then all processes are listed.
                   Otherwise, the type will be treated as a regexp and only
                   processes names that match the regexp will be displayed.

 --scheduler       Display scheduler type plugins

 --sep-proc-dir opt  Generate .rst output for processes as separate files in
                     specified directory.

 --summary         Display summary of all plugin types

 --type opt        Only display factories whose instance name matches the
                   specified regexp

 --version         Display program version

 -I opt            Add directory to plugin search path

 -b                Generate brief display

 -d                Display detailed information about plugins

 -h                Display usage information

 -v                Display program version

Debugging the Plugin Loading Process

There are times when an expected plugin is not found. plugin_explorer provides several options to assist in determining what the problem may be. A plugin file may contain more than one plugin. A common reason a plugin fails to load is an unresolved external reference. In that case a warning message indicating the problem is displayed when the program starts, and the plugin is not loaded and not available for use.

The --files option is used to display a list of all plugin files that have been found and successfully loaded:

$ plugin_explorer --files

---- Files Successfully Opened
 /disk2/projects/KWIVER/build/lib/modules/instrumentation_plugin.so
 /disk2/projects/KWIVER/build/lib/modules/kwiver_algo_ceres_plugin.so
 /disk2/projects/KWIVER/build/lib/modules/kwiver_algo_core_plugin.so
 /disk2/projects/KWIVER/build/lib/modules/kwiver_algo_darknet_plugin.so
 /disk2/projects/KWIVER/build/lib/modules/kwiver_algo_ocv_plugin.so
 /disk2/projects/KWIVER/build/lib/modules/kwiver_algo_proj_plugin.so
 /disk2/projects/KWIVER/build/lib/modules/kwiver_algo_vxl_plugin.so
 /disk2/projects/KWIVER/build/lib/sprokit/kwiver_processes.so
 /disk2/projects/KWIVER/build/lib/sprokit/kwiver_processes_adapter.so
 /disk2/projects/KWIVER/build/lib/sprokit/kwiver_processes_ocv.so
 /disk2/projects/KWIVER/build/lib/sprokit/kwiver_processes_vxl.so
 /disk2/projects/KWIVER/build/lib/sprokit/modules_python.so
 /disk2/projects/KWIVER/build/lib/sprokit/processes_clusters.so
 /disk2/projects/KWIVER/build/lib/sprokit/processes_examples.so
 /disk2/projects/KWIVER/build/lib/sprokit/processes_flow.so
 /disk2/projects/KWIVER/build/lib/sprokit/schedulers.so
 /disk2/projects/KWIVER/build/lib/sprokit/schedulers_examples.so
 /disk2/projects/KWIVER/build/lib/sprokit/template_processes.so

If a file was expected to be loaded and is not in the list, then it is possible that the directory containing the file was not in the plugin search path. The set of directories that are scanned for loadable plugins can be displayed with the --path command line option:

$ plugin_explorer --path

---- Plugin search path
   /disk2/projects/KWIVER/build/lib/modules
   /disk2/projects/KWIVER/build/lib/sprokit

   /usr/local/lib/sprokit
   /usr/local/lib/modules

Additional directories can be added to the plugin search path by passing the -I dir command line option to plugin_explorer (for example, plugin_explorer -I <my-plugin-dir> --files).

Pipeline Runner
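
The pipeline_runner tool executes a Sprokit pipeline described in a pipe file and, as noted above, provides a way to configure pipelines dynamically. The tutorials below invoke it as pipeline_runner -p <pipe-file>.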

Tutorials

The following links describe a set of KWIVER tutorials. All the source code mentioned here is provided in the repository.

Visit the repository for instructions on how to get and build the KWIVER code base.

Ensure you select the KWIVER_ENABLE_EXAMPLES option during CMake configuration. This will create a kwiver_examples executable that you can use to execute and step through any code in the example library. The kwiver_examples executable is made up of multiple cpp files, each designed to demonstrate a particular feature in KWIVER. Each file provides a single entry method for execution, and each of these entry methods is called from the kwiver_examples main.cpp file. This main method is intended to let you select and step through specific methods by commenting out the others.

As always, we would be happy to hear your comments and receive your contributions on any tutorial.

Basic Image and Video

Simple Image

The following pipeline will take in a set of images and display them in a window.

Setup

The pipe file associated with this tutorial is located at <kwiver build directory>/examples/pipelines/image_display.pipe. You will need to have KWIVER_ENABLE_EXAMPLES turned on during CMake configuration of KWIVER to get this file. Nothing more needs to be done to execute this pipe file. You can edit <kwiver build directory>/examples/pipelines/image_list.txt if you want to add new images to be viewed.
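
Structurally, such a pipe file defines an image source and a viewer process and connects them. A minimal sketch (process names chosen for illustration; the shipped file also contains configuration entries for each process) looks like this:

# ================================================================
process input
  :: frame_list_input
# configuration for the image list goes here

process disp
  :: image_viewer
# ================================================================

connect from input.image
         to   disp.image
connect from input.timestamp
         to   disp.timestamp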

Execution

Run the following command from the KWIVER build bin directory (bin/release on Windows), pointing to the image_display.pipe file with a relative path like this:

# Windows Example :
pipeline_runner -p ..\..\examples\pipelines\image_display.pipe
# Linux Example :
./pipeline_runner -p ../examples/pipelines/image_display.pipe
Process Graph

The following image displays the pipeline graph. Each process is linked to its associated definition page to learn more about it and the algorithms it uses.

[Pipeline graph: input (frame_list_input) and disp (image_viewer); input.image connects to disp.image, and input.timestamp connects to disp.timestamp.]

Simple Video

The following pipeline will take in a video file and play it in a window.

Setup

The pipe file associated with this tutorial is located at <kwiver build directory>/examples/pipelines/video_display.pipe. You will need to have KWIVER_ENABLE_EXAMPLES turned on during CMake configuration of KWIVER to get this file. Nothing more needs to be done to execute this pipe file. You can edit the pipe file if you want to change the video file to be viewed.

Execution

Run the following command from the KWIVER build bin directory (bin/release on Windows), pointing to the video_display.pipe file with a relative path like this:

# Windows Example :
pipeline_runner -p ..\..\examples\pipelines\video_display.pipe
# Linux Example :
./pipeline_runner -p ../examples/pipelines/video_display.pipe
Process Graph

The following image displays the pipeline graph. Each process is linked to its associated definition page to learn more about it and the algorithms it uses.

[Pipeline graph: input (video_input) and disp (image_viewer); input.image connects to disp.image, and input.timestamp connects to disp.timestamp.]

Images and video are the most fundamental data needed for computer vision. The following tutorials will demonstrate the basic functionality provided in kwiver associated with getting image and video data into the framework.

The basic image types and algorithms are defined here

The kwiver_examples file source/examples/cpp/how_to_part_01_images.cpp contains code associated with these types and algorithms. This file demonstrates instantiating and executing various algorithms to load, view, and get data from image and video files on disk.

The following example Sprokit pipelines are provided to demonstrate using these algorithms and types in a streaming process.

  • Image Display - A pipe that loads and displays several images
  • Video Display - A pipe that loads and displays a video file

Detection Types and Algorithms

Example Detection

These pipelines feature the example_detection algorithm in kwiver_algo_core. This algorithm simply takes in a set of images or a video and generates dummy detections for each image/frame. The detection boxes are then drawn on the frame and displayed in a window. It is a good example of how to use detection data types in KWIVER.

Setup

The pipe files associated with this tutorial are <kwiver build directory>/examples/pipelines/example_detector_on_image.pipe and <kwiver build directory>/examples/pipelines/example_detector_on_video.pipe. You will need to have KWIVER_ENABLE_EXAMPLES turned on during CMake configuration of KWIVER to get these files. Nothing more needs to be done to execute these pipe files. You can edit the example_detector_on_video pipe file if you want to change the video file to be viewed, or <kwiver build directory>/examples/pipelines/image_list.txt if you want to change the images to be viewed.

Execution

Run the following command from the KWIVER build bin directory (bin/release on Windows), pointing to the example_detector_on_image.pipe or example_detector_on_video.pipe file with a relative path like this:

# Windows Example :
pipeline_runner -p ..\..\examples\pipelines\example_detector_on_image.pipe
# Linux Example :
./pipeline_runner -p ../examples/pipelines/example_detector_on_image.pipe

# Windows Example :
pipeline_runner -p ..\..\examples\pipelines\example_detector_on_video.pipe
# Linux Example :
./pipeline_runner -p ../examples/pipelines/example_detector_on_video.pipe
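
Both pipes share the same structure: an image or video source feeds an image_object_detector process, the resulting detections are drawn back onto the frame, and the annotated frame is displayed. A structural sketch of the image variant (process names match the pipeline graph below; detector and viewer configuration omitted) looks like this:

# ================================================================
process input
  :: frame_list_input

process detector
  :: image_object_detector
# the example detector implementation is selected in this block

process draw
  :: draw_detected_object_boxes

process disp
  :: image_viewer
# ================================================================

connect from input.image
         to   detector.image
connect from input.image
         to   draw.image
connect from detector.detected_object_set
         to   draw.detected_object_set
connect from draw.image
         to   disp.image
connect from input.timestamp
         to   disp.timestamp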
Process Graph

The following image displays the pipeline graph. Each process is linked to its associated definition page to learn more about it and the algorithms it uses.

example_detector_on_image
[Pipeline graph: input (frame_list_input), detector (image_object_detector), draw (draw_detected_object_boxes), and disp (image_viewer); input.image connects to detector.image and draw.image, detector.detected_object_set connects to draw.detected_object_set, draw.image connects to disp.image, and input.timestamp connects to disp.timestamp.]
example_detector_on_video
[Pipeline graph: input (video_input), detector (image_object_detector), draw (draw_detected_object_boxes), and disp (image_viewer); input.image connects to detector.image and draw.image, detector.detected_object_set connects to draw.detected_object_set, draw.image connects to disp.image, and input.timestamp connects to disp.timestamp.]

Hough Detection

This pipeline features the hough_circle_detection algorithm in kwiver_algo_ocv. This algorithm simply takes in a set of images and detects any circles. The detection boxes are then drawn on the frame and displayed in a window.

Setup

The pipe file associated with this tutorial is <kwiver build directory>/examples/pipelines/hough_detector.pipe. You will need to have KWIVER_ENABLE_EXAMPLES turned on during CMake configuration of KWIVER to get this file. Nothing more needs to be done to execute this pipe file. You can edit <kwiver build directory>/examples/pipelines/hough_detector_images.txt if you want to add new images to be used.

Execution

Run the following command from the KWIVER build bin directory (bin/release on Windows), pointing to the hough_detector.pipe file with a relative path like this:

# Windows Example :
pipeline_runner -p ..\..\examples\pipelines\hough_detector.pipe
# Linux Example :
./pipeline_runner -p ../examples/pipelines/hough_detector.pipe
Process Graph

The following image displays the pipeline graph. Each process is linked to its associated definition page to learn more about it and the algorithms it uses.

[Pipeline graph: input (frame_list_input), detector (image_object_detector), draw (draw_detected_object_boxes), and disp (image_viewer); input.image connects to detector.image and draw.image, detector.detected_object_set connects to draw.detected_object_set, draw.image connects to disp.image, and input.timestamp connects to disp.timestamp.]

Darknet Detection

The following pipelines will take in a set of images or a video file. Each frame will be evaluated by the Darknet YOLO algorithm with a weight file that was trained on the VIRAT data set. This weight file will identify any ‘person’ or ‘vehicle’ objects in the image. The detections will then be drawn on the input image, displayed to the user, and written to disk.

Setup

In order to execute the pipeline files, follow these steps to set up KWIVER.

In order to run the pipelines associated with this tutorial you will need to download the required data package. The download is performed by targets created during the build process. In a bash terminal in your KWIVER build directory, make the following targets:

make external_darknet_example
make setup_darknet_example

If you are using Visual Studio, manually build the external_darknet_example project, followed by the setup_darknet_example project.

This will pull, place, and configure all the data associated with this example into the <your KWIVER build directory>/examples/pipelines/darknet folder.

The following files will be in the <build directory>/examples/pipelines/darknet folder:

  • images - Directory containing images used in this example
  • models - Directory containing configuration and weight files needed by Darknet
  • output - Directory where new images will be placed when the pipeline executes
  • video - Directory containing the video used in this example
  • configure.cmake - CMake script to configure the *.in files specific to your system
  • darknet_image.pipe - The pipe file to run Darknet on the provided example images
  • darknet_image.pipe.in - The pipe file to be configured to run on your system
  • darknet_video.pipe - The pipe file to run Darknet on the provided example video
  • darknet_video.pipe.in - The pipe file to be configured to run on your system
  • image_list.txt - The images to be used by the darknet_image.pipe file
  • image_list.txt.in - The list file to be configured to run on your system
  • readme.txt - This tutorial supersedes content in this file
Execution

Run the following command from the KWIVER build bin directory (bin/release on Windows), pointing to the darknet_image.pipe or darknet_video.pipe file with a relative path like this:

# Windows Example :
pipeline_runner -p ..\..\examples\pipelines\darknet\darknet_image.pipe
# Linux Example :
./pipeline_runner -p ../examples/pipelines/darknet/darknet_image.pipe

# Windows Example :
pipeline_runner -p ..\..\examples\pipelines\darknet\darknet_video.pipe
# Linux Example :
./pipeline_runner -p ../examples/pipelines/darknet/darknet_video.pipe

Note: you will need to supply a video file for the darknet_video pipe at this time. We will update the zip contents as soon as possible.
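
To point the video pipe at your own file, set the video_filename entry of the video_input process inside darknet_video.pipe. A hypothetical fragment (the process name in the shipped pipe file may differ) looks like this:

process input
  :: video_input
# path to the video file you supply
  video_filename = <path-to-your-video-file>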

The darknet_image.pipe file will write all generated output to the examples/pipelines/darknet/output/images directory.

The darknet_video.pipe file will write all generated output to the examples/pipelines/darknet/output/video directory.

Image Detection
Process Graph
darknet_image

The following image displays the pipeline graph. Each process is linked to its associated definition page to learn more about it and the algorithms it uses.

[Pipeline graph: input (frame_list_input), yolo_v2 (image_object_detector), draw (draw_detected_object_boxes), disp (image_viewer), write (image_writer), yolo_v2_kw18_writer (detected_object_output), and yolo_v2_csv_writer (detected_object_output); input.image connects to yolo_v2.image and draw.image, input.image_file_name connects to the image_file_name port of both writers, yolo_v2.detected_object_set connects to draw.detected_object_set and to the detected_object_set port of both writers, draw.image connects to disp.image and write.image, and input.timestamp connects to disp.timestamp.]
darknet_video

The following image displays the pipeline graph. Each process is linked to its associated definition page, where you can learn more about the process and the algorithms it uses.

strict digraph "unnamed" {
clusterrank=local;

subgraph "cluster_draw" {
color=lightgray;

"draw_main" [label=<<u>draw<br/>:: draw_detected_object_boxes</u>>,shape=ellipse,rank=same,fontcolor=blue,fontsize=16,href="../sprokit/processes/draw_detected_object_boxes.html"];

"draw_input_detected_object_set" [label="detected_object_set\n:: kwiver:detected_object_set",shape=none,height=0,width=0,fontsize=12];
"draw_input_detected_object_set" -> "draw_main" [arrowhead=none,color=black];
"draw_input_image" [label="image\n:: kwiver:image",shape=none,height=0,width=0,fontsize=12];
"draw_input_image" -> "draw_main" [arrowhead=none,color=black];

"draw_output__heartbeat" [label="_heartbeat\n:: _none",shape=none,height=0,width=0,fontsize=12];
"draw_main" -> "draw_output__heartbeat" [arrowhead=none,color=black];
"draw_output_image" [label="image\n:: kwiver:image",shape=none,height=0,width=0,fontsize=12];
"draw_main" -> "draw_output_image" [arrowhead=none,color=black];

}

subgraph "cluster_input" {
color=lightgray;

"input_main" [label=<<u>input<br/>:: video_input</u>>,shape=ellipse,rank=same,fontcolor=blue,fontsize=16,href="../sprokit/processes/video_input.html"];


"input_output__heartbeat" [label="_heartbeat\n:: _none",shape=none,height=0,width=0,fontsize=12];
"input_main" -> "input_output__heartbeat" [arrowhead=none,color=black];
"input_output_image" [label="image\n:: kwiver:image",shape=none,height=0,width=0,fontsize=12];
"input_main" -> "input_output_image" [arrowhead=none,color=black];
"input_output_timestamp" [label="timestamp\n:: kwiver:timestamp",shape=none,height=0,width=0,fontsize=12];
"input_main" -> "input_output_timestamp" [arrowhead=none,color=black];
"input_output_video_metadata" [label="video_metadata\n:: kwiver:video_metadata",shape=none,height=0,width=0,fontsize=12];
"input_main" -> "input_output_video_metadata" [arrowhead=none,color=black];

}

subgraph "cluster_write" {
color=lightgray;

"write_main" [label=<<u>write<br/>:: image_writer</u>>,shape=ellipse,rank=same,fontcolor=blue,fontsize=16,href="../sprokit/processes/image_writer.html"];

"write_input_image" [label="image\n:: kwiver:image",shape=none,height=0,width=0,fontsize=12];
"write_input_image" -> "write_main" [arrowhead=none,color=black];
"write_input_timestamp" [label="timestamp\n:: kwiver:timestamp",shape=none,height=0,width=0,fontsize=12];
"write_input_timestamp" -> "write_main" [arrowhead=none,color=black];

"write_output__heartbeat" [label="_heartbeat\n:: _none",shape=none,height=0,width=0,fontsize=12];
"write_main" -> "write_output__heartbeat" [arrowhead=none,color=black];

}

subgraph "cluster_yolo_v2" {
color=lightgray;

"yolo_v2_main" [label=<<u>yolo_v2<br/>:: image_object_detector</u>>,shape=ellipse,rank=same,fontcolor=blue,fontsize=16,href="../sprokit/processes/image_object_detector.html"];

"yolo_v2_input_image" [label="image\n:: kwiver:image",shape=none,height=0,width=0,fontsize=12];
"yolo_v2_input_image" -> "yolo_v2_main" [arrowhead=none,color=black];

"yolo_v2_output__heartbeat" [label="_heartbeat\n:: _none",shape=none,height=0,width=0,fontsize=12];
"yolo_v2_main" -> "yolo_v2_output__heartbeat" [arrowhead=none,color=black];
"yolo_v2_output_detected_object_set" [label="detected_object_set\n:: kwiver:detected_object_set",shape=none,height=0,width=0,fontsize=12];
"yolo_v2_main" -> "yolo_v2_output_detected_object_set" [arrowhead=none,color=black];

}

subgraph "cluster_yolo_v2_csv_writer" {
color=lightgray;

"yolo_v2_csv_writer_main" [label=<<u>yolo_v2_csv_writer<br/>:: detected_object_output</u>>,shape=ellipse,rank=same,fontcolor=blue,fontsize=16,href="../sprokit/processes/detected_object_output.html"];

"yolo_v2_csv_writer_input_detected_object_set" [label="detected_object_set\n:: kwiver:detected_object_set",shape=none,height=0,width=0,fontsize=12];
"yolo_v2_csv_writer_input_detected_object_set" -> "yolo_v2_csv_writer_main" [arrowhead=none,color=black];
"yolo_v2_csv_writer_input_image_file_name" [label="image_file_name\n:: kwiver:image_file_name",shape=none,height=0,width=0,fontsize=12];
"yolo_v2_csv_writer_input_image_file_name" -> "yolo_v2_csv_writer_main" [arrowhead=none,color=black];

"yolo_v2_csv_writer_output__heartbeat" [label="_heartbeat\n:: _none",shape=none,height=0,width=0,fontsize=12];
"yolo_v2_csv_writer_main" -> "yolo_v2_csv_writer_output__heartbeat" [arrowhead=none,color=black];

}

subgraph "cluster_yolo_v2_kw18_writer" {
color=lightgray;

"yolo_v2_kw18_writer_main" [label=<<u>yolo_v2_kw18_writer<br/>:: detected_object_output</u>>,shape=ellipse,rank=same,fontcolor=blue,fontsize=16,href="../sprokit/processes/detected_object_output.html"];

"yolo_v2_kw18_writer_input_detected_object_set" [label="detected_object_set\n:: kwiver:detected_object_set",shape=none,height=0,width=0,fontsize=12];
"yolo_v2_kw18_writer_input_detected_object_set" -> "yolo_v2_kw18_writer_main" [arrowhead=none,color=black];
"yolo_v2_kw18_writer_input_image_file_name" [label="image_file_name\n:: kwiver:image_file_name",shape=none,height=0,width=0,fontsize=12];
"yolo_v2_kw18_writer_input_image_file_name" -> "yolo_v2_kw18_writer_main" [arrowhead=none,color=black];

"yolo_v2_kw18_writer_output__heartbeat" [label="_heartbeat\n:: _none",shape=none,height=0,width=0,fontsize=12];
"yolo_v2_kw18_writer_main" -> "yolo_v2_kw18_writer_output__heartbeat" [arrowhead=none,color=black];

}

"draw_output_image" -> "write_input_image" [minlen=1,color=black,weight=1];
"input_output_image" -> "yolo_v2_input_image" [minlen=1,color=black,weight=1];
"input_output_image" -> "draw_input_image" [minlen=1,color=black,weight=1];
"yolo_v2_output_detected_object_set" -> "draw_input_detected_object_set" [minlen=1,color=black,weight=1];
"yolo_v2_output_detected_object_set" -> "yolo_v2_kw18_writer_input_detected_object_set" [minlen=1,color=black,weight=1];
"yolo_v2_output_detected_object_set" -> "yolo_v2_csv_writer_input_detected_object_set" [minlen=1,color=black,weight=1];

}

Object detection is the first step in tracking and identifying an activity. The following tutorials demonstrate the basic functionality provided in KWIVER for detecting objects in images and video.

The basic detection types and abstract algorithm interfaces are defined in Vital.

The kwiver_examples file source/examples/cpp/how_to_part_02_detections.cpp contains code associated with these types and algorithms. This example demonstrates instantiating and executing various detection algorithms on images and video.
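
For orientation, the following sketch shows the general pattern the example follows: load the KWIVER plugins, create concrete algorithm implementations by name through the abstract vital interfaces, and run a detector on an image. The implementation names, input file and exact calls shown here are assumptions; the bundled how_to_part_02_detections.cpp is the authoritative reference.

// Minimal sketch (not the full example).  Implementation names ("ocv",
// "hough_circle_detector") and the input file are placeholders.
#include <vital/plugin_loader/plugin_manager.h>
#include <vital/algo/image_io.h>
#include <vital/algo/image_object_detector.h>

int main()
{
  // Load all KWIVER plugins so algorithm implementations (Arrows) register
  // themselves with vital.
  kwiver::vital::plugin_manager::instance().load_all_plugins();

  // Instantiate concrete implementations through the abstract interfaces.
  auto image_reader = kwiver::vital::algo::image_io::create( "ocv" );
  auto detector =
    kwiver::vital::algo::image_object_detector::create( "hough_circle_detector" );

  // Read an image and run the detector on it; the result is a
  // detected_object_set holding bounding boxes and type/confidence data.
  auto image = image_reader->load( "./input_image.jpg" );
  auto detections = detector->detect( image );

  return detections ? 0 : 1;
}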

The following example sprokit pipelines are provided to demonstrate using these algorithms and types in a streaming process.

Example Detection A very basic implementation of the detection algorithm
Hough Detection Detect circles in images using a Hough detector
Darknet Detection Object detection using the Darknet library

Tracking Types and Algorithms

Coming Soon!

Activity Types and Algorithms

Coming Soon!

Extending KWIVER

This section discusses the various ways KWIVER can be extended with vital types, algorithms (Arrows) and processes.

Creating a new Algorithm

Adding Algorithm Implementations

How to configure an Algorithm

How to Instantiate an Algorithm

How to Make a Sprokit Process

How To Make a Pipeline

Logging Guidelines

The following are the available log levels and guidance on which level applies in a given situation.
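
As a concrete point of reference, here is a minimal sketch of obtaining a vital logger and emitting messages at several of the levels described below. The logger name, function and message text are illustrative only; the get_logger() call and LOG_* macros come from vital/logger.

// Illustrative use of the vital logging macros; names and messages are made up.
#include <vital/logger/logger.h>

void process_frame( int frame_number, double elapsed_seconds )
{
  // Loggers are obtained by name; names are typically hierarchical.
  auto logger = kwiver::vital::get_logger( "kwiver.examples.logging" );

  LOG_TRACE( logger, "Entering process_frame(), frame_number: " << frame_number );

  if ( elapsed_seconds > 1.0 )
  {
    LOG_WARN( logger, "Frame " << frame_number << " took " << elapsed_seconds
                      << " s to process; falling behind real time" );
  }

  LOG_INFO( logger, "Frame " << frame_number << " processed" );
  LOG_DEBUG( logger, "process_frame() completed, frame_number: " << frame_number
                     << ", elapsed_seconds: " << elapsed_seconds );
}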

FATAL

This level should generally only be used to record a failure that prevents the system from starting, i.e. the system is completely unusable. Errors during operation may also render the system unusable and warrant this level.

ERROR

Records that something went wrong, i.e. some sort of failure occurred, and either:

  • The system was not able to recover from the error, or
  • The system was able to recover, but at the expense of losing some information or failing to honour a request.

An ERROR should be immediately brought to the attention of an operator. To rephrase: if your error does not need immediate investigation by an operator, then it isn’t an error.

To permit monitoring tools to watch the log files for ERRORs and WARNings, it is crucial that:

  • These get logged,
  • Sufficient information is provided to identify the cause of the problem, and
  • The logging is done in a standard way, which lends itself to automatic monitoring.

For example, if the error is caused by a configuration failure, the configuration file name should be provided (especially if you have more than one file), as well as the property causing the problem.
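
As a hedged illustration of that last point (the file, property and function names are hypothetical), an ERROR for a configuration failure might be reported like this:

// Hypothetical helper: report a configuration failure so an operator (or a
// log-monitoring tool) can act on it -- name both the file and the property.
#include <string>
#include <vital/logger/logger.h>

void report_bad_detector_config( std::string const& config_file,
                                 std::string const& requested_impl )
{
  auto logger = kwiver::vital::get_logger( "kwiver.examples.config" );
  LOG_ERROR( logger, "Invalid configuration in '" << config_file
                     << "': property 'detector:type' refers to unknown "
                        "implementation '" << requested_impl << "'" );
}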

WARN

A WARN message records that something in the system was not as expected. It is not an error, i.e. it is not preventing correct operation of the system or any part of it, but it is still an indicator that something is wrong with the system that the operator should be aware of, and may wish to investigate. This level may be used for errors in user-supplied information.

INFO

INFO priority messages are intended to show what’s going on in the system, at a broad-brush level. INFO messages do not indicate that something’s amiss (use WARN or ERROR for that), and the system should be able to run at full speed in production with INFO level logging.

The following types of message are probably appropriate at INFO level:

<System component> successfully initialised

<Transaction type> transaction started, member: <member number>, amount: <amount>

<Transaction type> transaction completed, txNo: <transaction number>, member: <member number>, amount: <amount>, result: <result code>

DEBUG

DEBUG messages are intended to help isolate a problem in a running system, by showing the code that is executed, and the context information used during that execution. In many cases, it is that context information that is most important, so you should take pains to make the context as useful as possible. For example, the message ‘load_plugin() started’ says nothing about which plugin is being loaded or from which file, or anything else that might help us to relate this to an operation that failed.
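
As a hypothetical illustration (the variable and function names are made up), a context-rich version of that message might look like the following, so a reader can connect it to a later failure:

#include <string>
#include <vital/logger/logger.h>

// Hypothetical sketch: instead of the uninformative "load_plugin() started",
// include the plugin name and the file it is being loaded from.
void log_plugin_load_start( std::string const& plugin_name,
                            std::string const& plugin_path )
{
  auto logger = kwiver::vital::get_logger( "kwiver.examples.plugin" );
  LOG_DEBUG( logger, "load_plugin() started, plugin: '" << plugin_name
                     << "', file: '" << plugin_path << "'" );
}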

In normal operation, a production system would not be expected to run at DEBUG level. However, if there is an occasional problem being experienced, DEBUG logging may be enabled for an extended period, so it’s important that the overhead of this is not too high (up to 25% is perhaps OK).

The following types of message are probably appropriate at DEBUG level:

Entering <class name>.<method name>, <argument name>: <argument value>, [<argument name>: <argument value>…]

Method <class name>.<method name> completed [, returning: <return value>]

<class name>.<method name>: <description of some action being taken, complete with context information>

<class name>.<method name>: <description of some calculated value, or decision made, complete with context information>

Please note that DEBUG messages are intended to be used for debugging in production systems, so they must be written for public consumption. In particular, please avoid any messages in a non-standard format, e.g.

DEBUG ++++++++++++ This is here cause company “Blah” sucks +++++++++++

If a DEBUG message is very expensive to generate, you can guard it with an IS_DEBUG_ENABLED() check on the logger. Just make sure that nothing that happens inside that guarded block is required for normal system operation. Only sendmail should require debug logging to work.
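
A sketch of that guard is shown below, assuming the IS_DEBUG_ENABLED() macro provided alongside the vital logging macros; the expensive summary function is a hypothetical stand-in.

#include <string>
#include <vital/logger/logger.h>

// Stand-in for an expensive diagnostic computation.
std::string expensive_state_summary() { return "..."; }

void log_state_if_debugging()
{
  auto logger = kwiver::vital::get_logger( "kwiver.examples.debug" );

  // Guard the expensive work; nothing inside this block may be required for
  // normal system operation.
  if ( IS_DEBUG_ENABLED( logger ) )
  {
    LOG_DEBUG( logger, "Pipeline state: " << expensive_state_summary() );
  }
}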

TRACE

TRACE messages are intended for establishing the flow of control of the system. Typically TRACE messages are generated upon entering and exiting functions or methods.

When to log an Exception?

Ideally, an exception should only be logged by the code that handles the exception. Code that merely translates the exception should do no logging.
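
A sketch of that guideline follows (the function names and the failure itself are hypothetical): the translating layer rethrows without logging, and only the layer that actually handles the exception logs it.

#include <stdexcept>
#include <string>
#include <vital/logger/logger.h>

// Hypothetical lower layer: translates a low-level failure into a more
// meaningful exception without logging it.
void open_video( std::string const& path )
{
  try
  {
    throw std::runtime_error( "cannot open file" );  // stand-in for a real failure
  }
  catch ( std::exception const& e )
  {
    // Translate only; the handler below is responsible for logging.
    throw std::runtime_error( "video source '" + path + "' unavailable: " + e.what() );
  }
}

// Hypothetical top level: this code handles the exception, so this is where
// it gets logged.
void run( std::string const& path )
{
  auto logger = kwiver::vital::get_logger( "kwiver.examples.exceptions" );
  try
  {
    open_video( path );
  }
  catch ( std::exception const& e )
  {
    LOG_ERROR( logger, "Run aborted: " << e.what() );
  }
}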
