Hardware Loop#

class EvaluatorPlotMode(value)#

Bases: SerializableEnum

Evaluation plot mode during hardware loop runtime.

HIDE = 0#

Do not plot evaluation results during hardware loop runtime.

EVALUATION = 1#

Plot the evaluation during hardware loop runtime.

ARTIFACTS = 2#

Plot the series of generated scalar artifacts during hardware loop runtime.
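The three modes behave like a standard integer enumeration; a minimal sketch, assuming SerializableEnum compares and serializes like Python's built-in IntEnum:

```python
from enum import IntEnum


class EvaluatorPlotMode(IntEnum):
    """Evaluation plot mode during hardware loop runtime (sketch)."""

    HIDE = 0        # Do not plot evaluation results
    EVALUATION = 1  # Plot the evaluation itself
    ARTIFACTS = 2   # Plot the series of generated scalar artifacts


# Modes compare as plain integers and can be recovered from their value
mode = EvaluatorPlotMode(2)
print(mode.name)  # ARTIFACTS
```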

class EvaluatorRegistration(evaluator, plot_mode)#

Bases: Evaluator

Evaluator registration for the hardware loop.

Created by the HardwareLoop.add_evaluator() method.

Parameters:
  • evaluator (Evaluator) – The evaluator to be registered.

  • plot_mode (EvaluatorPlotMode) – Plot mode of the registered evaluator.

property evaluator: Evaluator#

Registered evaluator.

property plot_mode: EvaluatorPlotMode#

Plot mode of the registered evaluator.

evaluate()#

Evaluate the state of an investigated object.

Implements the process of extracting an arbitrary performance indicator, represented by the returned Artifact \(X_m\).

Returns:

Artifact \(X_m\) resulting from the evaluation.

Return type:

Artifact

property abbreviation: str#

Short string representation of this evaluator.

Used as a label for console output and plot axes annotations.

Returns:

String representation

Return type:

str

property title: str#

Long string representation of this evaluator.

Used as plot title.

Returns:

String representation

Return type:

str

property confidence: float#

Confidence threshold required for premature simulation abortion.

The confidence threshold \(\delta \in [0, 1]\) is the upper bound to the confidence level

\[\mathrm{P}\left(\left\| \bar{X}_M - \mathrm{E}\left[ X \right] \right\| > \mathrm{TOL} \right)\]

at which the sample collection for a single GridSection may be prematurely aborted.

Returns:

Confidence \(\delta\) between zero and one.

Return type:

float

Raises:

ValueError – If confidence is lower than zero or greater than one.

property tolerance: float#

Tolerance level required for premature simulation abortion.

The tolerance \(\mathrm{TOL} \in \mathbb{R}_{++}\) is the upper bound to the interval

\[\left\| \bar{X}_M - \mathrm{E}\left[ X \right] \right\|\]

by which the performance indicator estimation \(\bar{X}_M\) may diverge from the actual expected value \(\mathrm{E}\left[ X \right]\).

Returns:

Non-negative tolerance \(\mathrm{TOL}\).

Return type:

float

Raises:

ValueError – If tolerance is negative.
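Together, the confidence \(\delta\) and tolerance \(\mathrm{TOL}\) define a stopping rule: sample collection may be aborted once the probability of the estimate deviating from the true mean by more than \(\mathrm{TOL}\) drops below \(\delta\). A hedged sketch of such a check, using Chebyshev's inequality as a conservative bound (the library may internally use a tighter estimator):

```python
from statistics import variance


def may_abort(samples: list[float], tolerance: float, confidence: float) -> bool:
    """Conservative premature-abortion check via Chebyshev's inequality.

    P(|mean_M - E[X]| > TOL) <= Var(X) / (M * TOL**2); abortion is
    permitted once this bound falls below the confidence threshold delta.
    """
    M = len(samples)
    if M < 2:
        return False  # sample variance is undefined for fewer than two samples
    bound = variance(samples) / (M * tolerance**2)
    return bound < confidence


# Tightly clustered samples permit early abortion quickly
samples = [0.101, 0.099, 0.100, 0.102, 0.098]
print(may_abort(samples, tolerance=0.01, confidence=0.1))  # True
```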

generate_result(grid, artifacts)#

Generates an evaluation result from the artifacts collected over the whole simulation grid.

Parameters:
  • grid (Sequence[GridDimension]) – The Simulation grid.

  • artifacts (np.ndarray) – Numpy object array whose dimensions represent grid dimensions.

Return type:

EvaluationResult

Returns:

The evaluation result.
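The artifacts argument is a numpy object array whose shape follows the sweep grid, each cell holding the artifacts collected for one grid section. A hedged sketch of how such an array might be populated and reduced, assuming scalar artifact values and a hypothetical 2x3 grid:

```python
import numpy as np

# Hypothetical 2x3 sweep grid (e.g. 2 transmit powers x 3 carrier frequencies).
# Each cell holds the list of scalar artifacts collected for that section.
artifacts = np.empty((2, 3), dtype=object)
for index in np.ndindex(artifacts.shape):
    artifacts[index] = [0.1, 0.2, 0.3]  # placeholder artifact values

# A result generator typically reduces each cell to a scalar estimate
means = np.array([[np.mean(cell) for cell in row] for row in artifacts])
print(means.shape)  # (2, 3)
```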

class HardwareLoopSample(drop, evaluations, artifacts)#

Bases: object

Sample of the hardware loop.

Generated during HardwareLoop.run().

property drop: Drop#

Drop of the hardware loop sample.

property evaluations: Sequence[Evaluation]#

Evaluations of the hardware loop sample.

property artifacts: Sequence[Artifact]#

Artifacts of the hardware loop sample.

class HardwareLoopPlot(title='')#

Bases: ABC

Base class of plots visualized by the hardware loop during runtime.

property hardware_loop: HardwareLoop | None#

Hardware loop this plot is attached to.

property title: str#

Title of the hardware loop plot.

property figure: Figure#

Figure of the hardware loop plot.

property axes: Axes#

Axes of the hardware loop plot.

prepare_figure()#

Prepare the figure for the hardware loop plot.

Returns:

Figure and axes of the hardware loop plot.

Return type:

Tuple[plt.Figure, plt.Axes]

update_plot(sample)#

Update the hardware loop plot.

Internally calls the abstract _update_plot() method.

Parameters:

sample (HardwareLoopSample) – Hardware loop sample to be plotted.

Raises:

RuntimeError – If the hardware loop is not set.

Return type:

None
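The update flow (a public update_plot() guarding the attached loop and delegating to the abstract _update_plot()) follows the template-method pattern. A minimal self-contained sketch with hypothetical class names mirroring this API:

```python
from abc import ABC, abstractmethod


class LoopPlotSketch(ABC):
    """Template-method sketch of the HardwareLoopPlot update flow."""

    def __init__(self, title: str = "") -> None:
        self._title = title
        self._hardware_loop = None  # set when the plot is attached to a loop

    def update_plot(self, sample) -> None:
        # Guard: plotting requires an attached hardware loop
        if self._hardware_loop is None:
            raise RuntimeError("Hardware loop is not set")
        self._update_plot(sample)

    @abstractmethod
    def _update_plot(self, sample) -> None:
        ...  # subclass draws the sample onto its axes


class PrintPlot(LoopPlotSketch):
    def _update_plot(self, sample) -> None:
        print(f"{self._title}: {sample}")


plot = PrintPlot("Demo")
try:
    plot.update_plot("sample-0")
except RuntimeError as error:
    print(error)  # Hardware loop is not set
```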

class HardwareLoop(scenario, manual_triggering=False, plot_information=True, **kwargs)#

Bases: Serializable, Generic[PhysicalScenarioType, PDT], Pipeline[PhysicalScenarioType, PDT]

Hermespy hardware loop configuration.

Parameters:
  • scenario (PhysicalScenarioType) – The physical scenario being controlled by the hardware loop.

  • manual_triggering (bool, optional) – Require a keyboard user input to trigger each drop manually. Disabled by default.

  • plot_information (bool, optional) – Plot information during loop runtime. Enabled by default.

yaml_tag: Optional[str] = 'HardwareLoop'#

YAML serialization tag

property_blacklist: Set[str] = {'console'}#

Set of properties to be ignored during serialization.

serialized_attributes: Set[str] = {'manual_triggering', 'plot_information', 'scenario'}#

Set of object attributes to be serialized.

manual_triggering: bool#

Require a user input to trigger each drop manually

plot_information: bool#

Plot information during loop runtime

new_dimension(dimension, sample_points, *args)#

Add a dimension to the sweep grid.

Must be a property of the HardwareLoop.scenario().

Parameters:
  • dimension (str) – String representation of dimension location relative to the investigated object.

  • sample_points (List[Any]) – List of points at which the dimension will be sampled into a grid. The type of the points must be identical to the type of the swept property.

  • *args (Tuple[Any], optional) – References to the object the dimension belongs to. Resolved to the investigated object by default, but may be an attribute or sub-attribute of the investigated object.

Return type:

GridDimension

Returns: The newly created dimension object.
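The dimension string addresses a (possibly nested) property of the scenario. A hedged sketch of such dotted attribute-path resolution, with hypothetical Scenario and Device classes standing in for the real types:

```python
from functools import reduce


def resolve_dimension(obj: object, path: str) -> object:
    """Resolve a dotted attribute path such as 'device.carrier_frequency'."""
    return reduce(getattr, path.split("."), obj)


class Device:
    carrier_frequency = 3.7e9  # placeholder swept property


class Scenario:
    device = Device()


scenario = Scenario()
print(resolve_dimension(scenario, "device.carrier_frequency"))  # 3700000000.0
```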

add_dimension(dimension)#

Add a new dimension to the simulation grid.

Parameters:

dimension (GridDimension) – Dimension to be added.

Raises:

ValueError – If the dimension already exists within the grid.

Return type:

None

add_evaluator(evaluator)#

Add new evaluator to the hardware loop.

Parameters:

evaluator (Evaluator) – The evaluator to be added.

Return type:

None

add_plot(plot)#

Add a new plot to be visualized by the hardware loop during runtime.

Parameters:

plot (HardwareLoopPlot) – The plot to be added.

Return type:

None

property evaluators: List[Evaluator]#

List of registered evaluators.

property num_evaluators: int#

Number of registered evaluators.

Returns: The number of evaluators.

evaluator_index(evaluator)#

Index of the given evaluator.

Parameters:

evaluator (Evaluator) – The evaluator to be searched for.

Return type:

int

Returns: The index of the evaluator.

run(overwrite=True, campaign='default')#

Run the hardware loop configuration.

Parameters:
  • overwrite (bool, optional) – Allow the replacement of an already existing savefile.

  • campaign (str, optional) – Name of the measurement campaign.

Return type:

None

replay(file_location)#

Replay a stored pipeline run.

Parameters:

file_location (str) – File system location of the replay.

Return type:

None

classmethod to_yaml(representer, node)#

Serialize a serializable object to YAML.

Parameters:
  • representer (SafeRepresenter) – A handle to a representer used to generate valid YAML code. The representer gets passed down the serialization tree to each node.

  • node (Serializable) – The HardwareLoop instance to be serialized.

Return type:

MappingNode

Returns: The serialized YAML node.

classmethod from_yaml(constructor, node)#

Recall a new serializable class instance from YAML.

Parameters:
  • constructor (SafeConstructor) – A handle to the constructor extracting the YAML information.

  • node (Node) – YAML node representing the Serializable serialization.

Return type:

HardwareLoop

Returns: The de-serialized object.
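Given the yaml_tag and serialized_attributes documented above, a serialized configuration might look like the following hedged sketch; the scenario tag and its fields are placeholders depending on the scenario type in use:

```yaml
# Hypothetical serialized HardwareLoop; field names taken from
# serialized_attributes ('manual_triggering', 'plot_information', 'scenario')
!<HardwareLoop>
manual_triggering: false
plot_information: true
scenario: !<PhysicalScenario>  # placeholder scenario tag
  devices: []
```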