Radar Evaluation#

This module introduces several evaluators for performance indicators in radar detection. Refer to the PyMonte documentation for a detailed introduction to the concept of Evaluators.

classDiagram
    Evaluator <|-- RadarEvaluator
    RadarEvaluator <|-- DetectionProbEvaluator
    RadarEvaluator <|-- ReceiverOperatingCharacteristic
    RadarEvaluator <|-- RootMeanSquareError
    Serializable <|-- DetectionProbEvaluator
    Serializable <|-- ReceiverOperatingCharacteristic
    ArtifactTemplate <|-- DetectionProbArtifact
    Artifact <|-- RocArtifact
    Artifact <|-- RootMeanSquareArtifact
    EvaluationTemplate <|-- DetectionProbabilityEvaluation
    Evaluation <|-- RocEvaluation
    Evaluation <|-- RootMeanSquareEvaluation
    EvaluationResult <|-- RocEvaluationResult
    ScalarEvaluationResult <|-- RootMeanSquareErrorResult

The implemented radar evaluators all inherit from the common RadarEvaluator base, which is initialized by selecting one Radar whose performance will be evaluated and one RadarChannel instance containing the ground truth. The currently considered performance indicators are

| Evaluator | Artifact | Performance Indicator |
|---|---|---|
| DetectionProbEvaluator | DetectionProbArtifact | Probability of detecting the target at the correct bin |
| ReceiverOperatingCharacteristic | RocArtifact | Probability of detection versus probability of false alarm |
| RootMeanSquareError | RootMeanSquareArtifact | Root mean square error of point detections |

Configuring RadarEvaluators to evaluate the radar detection of Modem instances is rather straightforward:

# Create a radar operator and a radar channel containing the ground truth
radar = Radar()
channel = RadarChannel()

# Create a radar evaluator as an evaluation example
radar_evaluator = DetectionProbEvaluator(radar)

# Evaluators requiring ground truth, such as ReceiverOperatingCharacteristic,
# additionally expect the radar channel
roc_evaluator = ReceiverOperatingCharacteristic(radar, channel)

# Extract evaluation
radar_evaluation = radar_evaluator.evaluate()

# Visualize evaluation
radar_evaluation.plot()
class RadarEvaluator(receiving_radar, radar_channel=None)#

Bases: Evaluator, ABC

Base class for evaluating sensing performance.

Parameters:
  • receiving_radar (Radar) – Radar under test.

  • radar_channel (RadarChannelBase) – Radar channel modeling a desired target.

Raises:

ValueError – If the receiving radar is not an operator of the radar_channel receiver.

property receiving_radar: Radar#

Radar detector with target present.

Returns:

Handle to the receiving radar, when target is present.

Return type:

Radar

property radar_channel: RadarChannelBase#

The considered radar channel.

generate_result(grid, artifacts)#

Generates an evaluation result from the artifacts collected over the whole simulation grid.

Parameters:
  • grid (Sequence[GridDimension]) – The Simulation grid.

  • artifacts (np.ndarray) – Numpy object array whose dimensions represent grid dimensions.

Return type:

EvaluationResult

Returns:

The evaluation result.
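Conceptually, generate_result collapses the artifacts gathered at each grid point into a single result. The sketch below illustrates that idea in plain NumPy (collapse_artifacts is a hypothetical helper, not the library implementation): each cell of an object array holds the boolean artifacts collected there, and the scalar result per cell is their mean.

```python
import numpy as np


def collapse_artifacts(artifacts: np.ndarray) -> np.ndarray:
    """Average the list of boolean artifacts in each grid cell to a scalar."""
    scalars = np.empty(artifacts.shape, dtype=float)
    for index in np.ndindex(artifacts.shape):
        scalars[index] = np.mean([float(a) for a in artifacts[index]])
    return scalars


# Two grid points, four collected artifacts each
grid_artifacts = np.empty((2,), dtype=object)
grid_artifacts[0] = [True, True, False, True]
grid_artifacts[1] = [False, False, True, False]

print(collapse_artifacts(grid_artifacts))  # [0.75 0.25]
```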

class DetectionProbArtifact(artifact)#

Bases: ArtifactTemplate[bool]

Artifact of a detection probability evaluation for a radar detector.

Parameters:

artifact (AT) – Artifact value.

class DetectionProbabilityEvaluation(evaluation)#

Bases: EvaluationTemplate[bool]

artifact()#

Generate an artifact from this evaluation.

Returns: The evaluation artifact.

Return type:

DetectionProbArtifact

class DetectionProbEvaluator(receiving_radar)#

Bases: RadarEvaluator, Serializable

Evaluate the detection probability at a radar detector, considering any bin: a detection is counted if any bin in the radar cube exceeds the threshold.

Parameters:

receiving_radar (Radar) – Radar detector
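The any-bin criterion described above can be sketched in a few lines (detect is a hypothetical helper, not part of the library):

```python
import numpy as np


def detect(cube: np.ndarray, threshold: float) -> bool:
    """A drop counts as a detection if any radar cube bin exceeds the threshold."""
    return bool(np.any(cube > threshold))


cube = np.array([[0.1, 0.3],
                 [0.2, 0.9]])

print(detect(cube, 0.5))  # True: the 0.9 bin exceeds the threshold
print(detect(cube, 1.0))  # False: no bin exceeds 1.0
```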

yaml_tag: Optional[str] = 'DetectionProbEvaluator'#

YAML serialization tag

property abbreviation: str#

Short string representation of this evaluator.

Used as a label for console output and plot axes annotations.

Returns:

String representation

Return type:

str

property title: str#

Long string representation of this evaluator.

Used as plot title.

Returns:

String representation

Return type:

str

generate_result(grid, artifacts)#

Generates an evaluation result from the artifacts collected over the whole simulation grid.

Parameters:
  • grid (Sequence[GridDimension]) – The Simulation grid.

  • artifacts (np.ndarray) – Numpy object array whose dimensions represent grid dimensions.

Return type:

ScalarEvaluationResult

Returns:

The evaluation result.

evaluate()#

Evaluate the state of an investigated object.

Implements the process of extracting an arbitrary performance indicator, represented by the returned Artifact \(X_m\).

Returns:

Artifact \(X_m\) resulting from the evaluation.

Return type:

Artifact

class RocArtifact(h0_value, h1_value)#

Bases: Artifact

Artifact of receiver operating characteristics (ROC) evaluation

Parameters:
  • h0_value (float) – Measured value for null-hypothesis (H0), i.e., noise only

  • h1_value (float) – Measured value for alternative hypothesis (H1)

property h0_value: float#
property h1_value: float#
class RocEvaluation(cube_h0, cube_h1)#

Bases: Evaluation

Evaluation of receiver operating characteristics (ROC)

data_h0: ndarray#
data_h1: ndarray#
artifact()#

Generate an artifact from this evaluation.

Returns: The evaluation artifact.

Return type:

RocArtifact

class RocEvaluationResult(grid, detection_probabilities, false_alarm_probabilities, title='Receiver Operating Characteristics')#

Bases: EvaluationResult

Final result of a receiver operating characteristics evaluation.

Parameters:
  • grid (Sequence[GridDimension]) – Grid dimensions of the evaluation result.

  • detection_probabilities (np.ndarray) – Detection probabilities for each grid point.

  • false_alarm_probabilities (np.ndarray) – False alarm probabilities for each grid point.

  • title (str, optional) – Title of the evaluation result.

property title: str#

Title of the visualizable.

Returns: Title string.

to_array()#

Convert the evaluation result raw data to an array representation.

Used to store the results in arbitrary binary file formats after simulation execution.

Return type:

ndarray

Returns:

The array result representation.

class ReceiverOperatingCharacteristic(radar, radar_channel=None, num_thresholds=101)#

Bases: RadarEvaluator, Serializable

Evaluate the receiver operating characteristics for a radar operator.

Parameters:
  • radar (Radar) – Radar under test.

  • radar_channel (RadarChannelBase, optional) – Radar channel containing a desired target. If the radar channel is not specified, the evaluate() routine will not be available.

  • num_thresholds (int, optional) – Number of different thresholds to be considered in ROC curve
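How sweeping num_thresholds thresholds traces out an ROC curve can be sketched with plain NumPy (roc_curve is a hypothetical helper mirroring the parameter, not the library routine): for each candidate threshold, the false alarm probability is the fraction of H0 measurements above it and the detection probability the fraction of H1 measurements above it.

```python
import numpy as np


def roc_curve(h0_values, h1_values, num_thresholds=101):
    """Trace (false alarm, detection) probability pairs over a threshold sweep."""
    h0 = np.asarray(h0_values)  # measurements under H0 (noise only)
    h1 = np.asarray(h1_values)  # measurements under H1 (target present)
    thresholds = np.linspace(0.0, max(h0.max(), h1.max()), num_thresholds)
    pfa = np.array([np.mean(h0 > t) for t in thresholds])
    pd = np.array([np.mean(h1 > t) for t in thresholds])
    return pfa, pd


pfa, pd = roc_curve([0.1, 0.2, 0.3], [0.6, 0.8, 0.9], num_thresholds=5)
```

Plotting pd over pfa then yields the familiar ROC curve, running from (1, 1) at a zero threshold down to (0, 0) at the maximum threshold.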

yaml_tag: Optional[str] = 'ROC'#

YAML serialization tag.

evaluate()#

Evaluate the state of an investigated object.

Implements the process of extracting an arbitrary performance indicator, represented by the returned Artifact \(X_m\).

Returns:

Artifact \(X_m\) resulting from the evaluation.

Return type:

Artifact

property abbreviation: str#

Short string representation of this evaluator.

Used as a label for console output and plot axes annotations.

Returns:

String representation

Return type:

str

property title: str#

Long string representation of this evaluator.

Used as plot title.

Returns:

String representation

Return type:

str

generate_result(grid, artifacts)#

Generates an evaluation result from the artifacts collected over the whole simulation grid.

Parameters:
  • grid (Sequence[GridDimension]) – The Simulation grid.

  • artifacts (np.ndarray) – Numpy object array whose dimensions represent grid dimensions.

Return type:

RocEvaluationResult

Returns:

The evaluation result.

classmethod From_Scenarios(h0_scenario, h1_scenario, h0_operator=None, h1_operator=None)#

Compute an ROC evaluation result from two scenarios, one representing the null hypothesis (H0) and one the alternative hypothesis (H1).
Return type:

RocEvaluationResult

classmethod From_HDF(file, h0_campaign='h0_measurements', h1_campaign='h1_measurements')#

Compute an ROC evaluation result from a savefile.

Parameters:
  • file (Union[str, File]) – Savefile containing the measurements. Either as file system location or h5py File handle.

  • h0_campaign (str, optional) – Campaign identifier of the h0 hypothesis measurements. By default, h0_measurements is assumed.

  • h1_campaign (str, optional) – Campaign identifier of the h1 hypothesis measurements. By default, h1_measurements is assumed.

Return type:

RocEvaluationResult

Returns: The ROC evaluation result.

class RootMeanSquareArtifact(num_errors, cummulation)#

Bases: Artifact

Artifact of a root mean square evaluation

Parameters:
  • num_errors (int) – Number of errors.

  • cummulation (float) – Sum of squared error distances.

to_scalar()#

Scalar representation of this artifact’s content.

Used to evaluate premature stopping criteria for the underlying evaluation.

Returns:

Scalar floating-point representation. None if a conversion to scalar is impossible.

Return type:

Optional[float]

property num_errors: int#

Number of accumulated errors

property cummulation: float#

Accumulated squared error
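Assuming to_scalar combines the two fields in the usual way, the root mean square error follows as the square root of the accumulated squared error over the number of errors (rmse_from_artifact is a hypothetical helper, not the library implementation):

```python
import math


def rmse_from_artifact(num_errors: int, cummulation: float) -> float:
    """Root mean square error from the accumulated squared error sum."""
    return math.sqrt(cummulation / num_errors)


print(rmse_from_artifact(4, 16.0))  # 2.0
```

Keeping the sum and count separately, rather than a precomputed mean, lets artifacts from multiple drops be merged exactly before the final square root is taken.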

class RootMeanSquareEvaluation(pcl, ground_truth)#

Bases: Evaluation

Result of a single root mean square evaluation.

artifact()#

Generate an artifact from this evaluation.

Returns: The evaluation artifact.

Return type:

RootMeanSquareArtifact

class RootMeanSquareErrorResult(grid, scalar_results, evaluator, plot_surface=True)#

Bases: ScalarEvaluationResult

Result of a root mean square error evaluation.

Parameters:
  • grid (Sequence[GridDimension]) – Simulation grid.

  • scalar_results (np.ndarray) – Scalar results generated from collecting samples over the simulation grid.

  • evaluator (Evaluator) – The evaluator generating the results.

  • plot_surface (bool, optional) – Enable surface plotting for two-dimensional grids. Enabled by default.

class RootMeanSquareError(receiving_radar, radar_channel=None)#

Bases: RadarEvaluator

Root mean square estimation error of point detections.

Parameters:
  • receiving_radar (Radar) – Radar under test.

  • radar_channel (RadarChannelBase) – Radar channel modeling a desired target.

Raises:

ValueError – If the receiving radar is not an operator of the radar_channel receiver.

evaluate()#

Evaluate the state of an investigated object.

Implements the process of extracting an arbitrary performance indicator, represented by the returned Artifact \(X_m\).

Returns:

Artifact \(X_m\) resulting from the evaluation.

Return type:

Artifact

property title: str#

Long string representation of this evaluator.

Used as plot title.

Returns:

String representation

Return type:

str

property abbreviation: str#

Short string representation of this evaluator.

Used as a label for console output and plot axes annotations.

Returns:

String representation

Return type:

str

generate_result(grid, artifacts)#

Generates an evaluation result from the artifacts collected over the whole simulation grid.

Parameters:
  • grid (Sequence[GridDimension]) – The Simulation grid.

  • artifacts (np.ndarray) – Numpy object array whose dimensions represent grid dimensions.

Return type:

RootMeanSquareErrorResult

Returns:

The evaluation result.