Evaluation

Inheritance diagram of hermespy.core.pymonte.evaluation.Evaluator, hermespy.core.pymonte.evaluation.Evaluation, hermespy.core.pymonte.evaluation.EvaluationTemplate, hermespy.core.pymonte.evaluation.EvaluationResult
class Evaluator(plot_scale='linear', tick_format=ValueType.LIN)[source]

Bases: ABC

Evaluation routine for investigated object states, extracting performance indicators of interest.

Evaluators represent the process of extracting arbitrary performance indicator samples \(X_m\) in the form of Artifact instances from investigated object states.
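The Evaluator → Evaluation → Artifact chain can be sketched with a minimal stand-in (plain Python, not the actual hermespy classes; the names `PowerEvaluator`, `PowerEvaluation`, and the `device` attribute are illustrative assumptions, and the real interfaces carry additional members documented below):

```python
from abc import ABC, abstractmethod


class Artifact:
    """Stand-in for hermespy's Artifact: a single scalar sample X_m."""

    def __init__(self, value: float) -> None:
        self.value = value


class Evaluation(ABC):
    """Stand-in for an Evaluation of a single simulation sample."""

    @abstractmethod
    def artifact(self) -> Artifact:
        ...  # Generate an artifact from this evaluation


class Evaluator(ABC):
    """Stand-in for the abstract Evaluator interface."""

    @abstractmethod
    def evaluate(self) -> Evaluation:
        ...  # Extract a performance indicator from the investigated object

    @property
    @abstractmethod
    def abbreviation(self) -> str:
        ...  # Short label for console output and plot axes


class PowerEvaluation(Evaluation):
    """Hypothetical evaluation wrapping a sampled power value."""

    def __init__(self, power: float) -> None:
        self.power = power

    def artifact(self) -> Artifact:
        return Artifact(self.power)


class PowerEvaluator(Evaluator):
    """Hypothetical evaluator sampling a device's transmit power."""

    def __init__(self, device) -> None:
        self.device = device

    def evaluate(self) -> Evaluation:
        return PowerEvaluation(self.device.power)

    @property
    def abbreviation(self) -> str:
        return "Pwr"
```

During a simulation run, the framework calls `evaluate()` once per sample and collects the resulting artifacts into an EvaluationResult over the parameter grid.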

abstract evaluate()[source]

Evaluate the state of an investigated object.

Implements the process of extracting an arbitrary performance indicator sample, from which the Artifact \(X_m\) is generated.

Returns: The Evaluation from which the Artifact \(X_m\) can be generated.

Return type:

Evaluation

abstract initialize_result(grid)[source]

Initialize the respective result object for this evaluator.

Parameters:

grid (Sequence[GridDimensionInfo]) – The parameter grid over which the simulation iterates.

Return type:

EvaluationResult

Returns: The initialized evaluation result.

abstract property abbreviation: str[source]

Short string representation of this evaluator.

Used as a label for console output and plot axes annotations.

property plot_scale: str[source]

Scale of the scalar evaluation plot.

Refer to the Matplotlib documentation for a list of accepted values.

Returns: The scale identifier string.

tick_format: ValueType[source]

abstract property title: str[source]

Long string representation of this evaluator.

Used as plot title.

class Evaluation[source]

Bases: Generic[VT], Visualizable[VT]

Evaluation of a single simulation sample.

Evaluations are generated by Evaluators during Evaluator.evaluate().

abstract artifact()[source]

Generate an artifact from this evaluation.

Returns: The evaluation artifact.

Return type:

Artifact

class EvaluationTemplate(evaluation)[source]

Bases: Generic[ET, VT], Evaluation[VT], ABC

Template class for simple evaluations containing a single object.

Parameters:

evaluation (TypeVar(ET, bound=object)) – The represented evaluation.

property evaluation: ET[source]

The represented evaluation.
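The template pattern can be illustrated with a simplified stand-in (plain Python, not the actual hermespy classes; `BitErrorEvaluation` and the scalar payload are illustrative assumptions):

```python
from abc import ABC, abstractmethod
from typing import Generic, TypeVar

ET = TypeVar("ET")


class Artifact:
    """Stand-in for hermespy's Artifact: a single scalar sample."""

    def __init__(self, value: float) -> None:
        self.value = value


class EvaluationTemplate(Generic[ET], ABC):
    """Stand-in: an evaluation that simply wraps a single object."""

    def __init__(self, evaluation: ET) -> None:
        self.__evaluation = evaluation

    @property
    def evaluation(self) -> ET:
        return self.__evaluation

    @abstractmethod
    def artifact(self) -> Artifact:
        ...


class BitErrorEvaluation(EvaluationTemplate[float]):
    """Hypothetical template subclass wrapping a bit error rate scalar."""

    def artifact(self) -> Artifact:
        return Artifact(self.evaluation)


print(BitErrorEvaluation(0.05).artifact().value)  # 0.05
```

Subclasses only need to implement `artifact()`; storage of and access to the wrapped object is inherited from the template.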

class EvaluationResult(grid, evaluator=None, base_dimension_index=0)[source]

Bases: Generic[AT], Visualizable[PlotVisualization], ABC

Result of an evaluation routine iterating over a parameter grid.

Evaluation results are generated by Evaluator instances as the final step of the evaluation routine.

Parameters:
  • grid (Sequence[GridDimensionInfo]) – Parameter grid over which the simulation generating this result iterated.

  • evaluator (Evaluator | None) – Evaluator that generated this result. If not specified, the result is considered to be generated by an unknown evaluator.

abstract add_artifact(coordinates, artifact, compute_confidence=True)[source]

Add an artifact to this evaluation result.

Parameters:
  • coordinates (tuple[int, ...]) – Coordinates of the grid section to which the artifact belongs.

  • artifact (TypeVar(AT, bound=Artifact)) – Artifact to be added.

  • compute_confidence (bool) – Whether to compute the confidence level of the evaluation result for the given section coordinates.

Return type:

bool

Returns:

True if the result can be trusted, i.e. the confidence criterion is met for the given section. Always False if compute_confidence is set to False.
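A concrete result might accumulate artifacts per grid section roughly as follows (a simplified sketch, not hermespy's implementation; the mean aggregation, the raw scalar values in place of Artifact objects, and the fixed sample-count confidence threshold are all assumptions):

```python
import numpy as np


class ScalarResult:
    """Sketch: collects scalar artifact values per grid section."""

    def __init__(self, grid_shape: tuple[int, ...], min_samples: int = 10) -> None:
        self.grid_shape = grid_shape
        self.min_samples = min_samples
        self.samples: dict[tuple[int, ...], list[float]] = {}

    def add_artifact(self, coordinates, value, compute_confidence=True) -> bool:
        # Store the sample under its grid section coordinates
        self.samples.setdefault(tuple(coordinates), []).append(value)
        if not compute_confidence:
            return False  # Mirrors the documented contract
        # Crude stand-in for a statistical confidence criterion
        return len(self.samples[tuple(coordinates)]) >= self.min_samples

    def to_array(self) -> np.ndarray:
        # Aggregate each section's samples; sections without samples stay NaN
        out = np.full(self.grid_shape, np.nan)
        for coords, values in self.samples.items():
            out[coords] = float(np.mean(values))
        return out


result = ScalarResult((2,), min_samples=2)
result.add_artifact((0,), 1.0)           # False: only one sample so far
trusted = result.add_artifact((0,), 3.0)  # True: threshold reached
array = result.to_array()                 # section (0,): mean 2.0; section (1,): NaN
```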

print(console=None)[source]

Print a readable version of this evaluation result.

Parameters:

console (Console | None) – Rich console to print in. If not provided, a new one will be generated.

Return type:

None

abstract runtime_estimates()[source]

Extract a runtime estimate for this evaluation result.

Returns: A numpy array containing the runtime estimates for each grid section, or None if no estimates are available.

Return type:

None | ndarray

abstract to_array()[source]

Convert the evaluation result raw data to an array representation.

Used to store the results in arbitrary binary file formats after simulation execution.

Returns: The array result representation.

Return type:

ndarray

to_str(grid_coordinates)[source]

Convert the evaluation result at the specified grid coordinates to a string representation.

Parameters:

grid_coordinates (Sequence[int]) – Coordinates of the grid section to be converted.

Return type:

str

Returns: The string representation of the evaluation result.

property base_dimension_index: int[source]

Index of the base dimension used for plotting.

property evaluator: Evaluator | None[source]

Evaluator that generated this result.

property grid: Sequence[GridDimensionInfo][source]

Parameter grid over which the simulation iterated.

class ET[source]

Type of Monte Carlo evaluation.

alias of TypeVar('ET', bound=object)