PyMonte#

PyMonte is a stand-alone core module of HermesPy, enabling efficient and flexible Monte Carlo simulations over arbitrary configuration parameter combinations. Built on top of the Ray project, it allows any object serializable by the standard pickle module to serve as the system model of a Monte Carlo style simulation campaign.

[Figure: flowchart of the sampling process. The parameters of a grid section configure the investigated object; each configured evaluator extracts an artifact from the resulting object realization, and the artifacts together form a sample.]

Monte Carlo simulations usually sweep over combinations of multiple parameter settings, configuring the underlying system model and generating simulation samples from independent realizations of the model state. PyMonte refers to a single parameter combination as a GridSection, with the set of all parameter combinations making up the simulation grid. Each settable property of the investigated object is treated as a potential simulation parameter within the grid, i.e. each settable property can be represented by an axis of the multidimensional simulation grid.

Evaluator instances extract performance indicators, referred to as Artifacts, from each investigated object realization. A set of artifacts drawn from the same investigated object realization makes up a single MonteCarloSample. During the execution of PyMonte simulations, between \(M_\mathrm{min}\) and \(M_\mathrm{max}\) samples are generated from investigated object realizations for each grid section. The sample generation for a grid section may be aborted prematurely once all evaluators have reached a configured confidence threshold. Refer to Bayer et al.[1] for a detailed description of the implemented algorithm.

[Figure: flowchart of the workload distribution. A central simulation controller assigns grid sections to actors #1 through #N, each holding its own instance of the investigated object and returning the generated samples.]

The actual simulation workload distribution is visualized in the previous flowchart. Using Ray, PyMonte spawns a number of MonteCarloActor containers, the number of which depends on the detected resources (i.e. the number of available CPU cores). A central simulation controller schedules the workload by assigning GridSection indices as tasks to the actors, which return the resulting simulation samples once their simulation iteration is completed.
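As a rough end-to-end sketch of this workflow (the Model class, its snr property, and all numeric values are illustrative assumptions, not part of the PyMonte API):

    from hermespy.core.monte_carlo import MonteCarlo

    class Model:
        """Hypothetical system model; any pickle-serializable object qualifies."""

        def __init__(self) -> None:
            self.snr = 0.0  # settable property, i.e. a potential grid axis

    # Configure the simulation around the investigated object
    simulation = MonteCarlo(
        investigated_object=Model(),
        num_samples=1000,      # maximum number of samples per grid section
        min_num_samples=100,   # minimum number of samples per grid section
    )

    # Each registered dimension becomes one axis of the simulation grid
    simulation.new_dimension('snr', [0.0, 10.0, 20.0])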

class ActorRunResult(samples=None, message=None)[source]#

Bases: object

Result object returned by generating a single sample from a simulation runner.

Parameters:
  • samples (List[MonteCarloSample], optional) – Samples generated by the remote actor run.

  • message (str, optional) – Message returned from the remote actor run.

message: Optional[str]#
samples: Optional[List[MonteCarloSample]]#
class Artifact[source]#

Bases: object

Result of an investigated object evaluation.

Generated by Evaluator instances operating on investigated object states. In other words, Evaluation.artifact() is expected to return an instance derived from this base class.

Artifacts may, in general, represent any sort of object. However, it is encouraged to provide a scalar floating-point representation for data visualization by implementing the to_scalar() method.

abstract to_scalar()[source]#

Scalar representation of this artifact’s content.

Used to evaluate premature stopping criteria for the underlying evaluation.

Return type:

Optional[float]

Returns:

Scalar floating-point representation. None if a conversion to scalar is impossible.
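A minimal concrete artifact might be sketched as follows (the PowerArtifact name and its power value are illustrative):

    from hermespy.core.monte_carlo import Artifact

    class PowerArtifact(Artifact):
        """Illustrative artifact wrapping a single measured power value."""

        def __init__(self, power: float) -> None:
            self.power = power

        def to_scalar(self) -> float:
            # Scalar representation consumed by visualization routines
            # and the premature stopping criterion
            return self.power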

class ArtifactTemplate(artifact)[source]#

Bases: Generic[FAT], Artifact

Scalar numerical result of an investigated object evaluation.

Implements the common case of an Artifact representing a scalar numerical value.

Parameters:

artifact (FAT) – Artifact value.

to_scalar()[source]#

Scalar representation of this artifact’s content.

Used to evaluate premature stopping criteria for the underlying evaluation.

Return type:

float

Returns:

Scalar floating-point representation.

property artifact: FAT#

Evaluation artifact.

Provides direct access to the represented artifact.

Returns:

Copy of the artifact.

Return type:

FAT
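In the common scalar case, ArtifactTemplate can be instantiated directly instead of subclassing Artifact, for example:

    from hermespy.core.monte_carlo import ArtifactTemplate

    artifact = ArtifactTemplate(0.125)   # wrap a scalar evaluation result
    assert artifact.to_scalar() == 0.125
    assert artifact.artifact == 0.125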

class Evaluation[source]#

Bases: Visualizable

Evaluation of a single simulation sample.

Evaluations are generated by Evaluators during Evaluator.evaluate().

abstract artifact()[source]#

Generate an artifact from this evaluation.

Returns: The evaluation artifact.

Return type:

Artifact

class EvaluationResult(grid, evaluator)[source]#

Bases: Visualizable, ABC

Result of an evaluation routine iterating over a parameter grid.

Evaluation results are generated by Evaluator instances as a final step of the evaluation routine.

Parameters:

  • grid (Sequence[GridDimension]) – Parameter grid over which the simulation generating this result iterated.

  • evaluator (Evaluator) – The evaluator that generated this result.

print(console=None)[source]#

Print a readable version of this evaluation result.

Parameters:

console (Console | None) – Rich console to print in. If not provided, a new one will be generated.

Return type:

None

abstract to_array()[source]#

Convert the evaluation result raw data to an array representation.

Used to store the results in arbitrary binary file formats after simulation execution.

Return type:

ndarray

Returns:

The array result representation.
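Since to_array() yields a plain NumPy array, results can be persisted with standard NumPy routines; a minimal sketch:

    import numpy as np
    from hermespy.core.monte_carlo import EvaluationResult

    def save_result(result: EvaluationResult, path: str) -> None:
        # Store the raw result data in NumPy's binary format
        np.save(path, result.to_array())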

property evaluator: Evaluator#

Evaluator that generated this result.

property grid: Sequence[GridDimension]#

Parameter grid over which the simulation iterated.

class EvaluationTemplate(evaluation)[source]#

Bases: Generic[ET], Evaluation, ABC

evaluation: TypeVar(ET)#
class Evaluator[source]#

Bases: ABC

Evaluation routine for investigated object states, extracting performance indicators of interest.

Evaluators represent the process of extracting arbitrary performance indicator samples \(X_m\) in the form of Artifact instances from investigated object states. Once a MonteCarloActor has set its investigated object to a new random state, it calls the evaluate() routines of all configured evaluators, collecting the resulting respective Artifact instances.

For a given set of Artifact instances, evaluators are expected to report a confidence_level() which may result in a premature abortion of the sample collection routine for a single GridSection. By default, the routine suggested by Bayer et al.[1] is applied: Considering a tolerance \(\mathrm{TOL} \in \mathbb{R}_{++}\), the confidence in the mean performance indicator

\[\bar{X}_M = \frac{1}{M} \sum_{m = 1}^{M} X_m\]

is considered sufficient once, for a given threshold \(\delta \in \mathbb{R}_{++}\), the condition

\[\mathrm{P}\left(\left\| \bar{X}_M - \mathrm{E}\left[ X \right] \right\| > \mathrm{TOL} \right) \leq \delta\]

is met. The number of actually collected artifacts per GridSection \(M \in [M_{\mathrm{min}}, M_{\mathrm{max}}]\) lies between a minimum number of required samples \(M_{\mathrm{min}} \in \mathbb{R}_{+}\) and an upper limit \(M_{\mathrm{max}} \in \mathbb{R}_{++}\).
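Building on the methods documented below, a custom evaluator could be sketched roughly as follows (PowerEvaluation, PowerEvaluator and the model's measure_power() method are illustrative assumptions; additional Visualizable plotting hooks may be required in practice):

    from typing import Sequence
    import numpy as np
    from hermespy.core.monte_carlo import (
        ArtifactTemplate, EvaluationTemplate, Evaluator,
        GridDimension, ScalarEvaluationResult,
    )

    class PowerEvaluation(EvaluationTemplate[float]):
        def artifact(self) -> ArtifactTemplate:
            # Collapse the evaluation into a scalar artifact X_m
            return ArtifactTemplate(self.evaluation)

    class PowerEvaluator(Evaluator):
        """Illustrative evaluator extracting a power indicator."""

        def __init__(self, model) -> None:
            super().__init__()
            self.__model = model

        def evaluate(self) -> PowerEvaluation:
            # measure_power() is a hypothetical method of the model
            return PowerEvaluation(self.__model.measure_power())

        def generate_result(self, grid: Sequence[GridDimension],
                            artifacts: np.ndarray) -> ScalarEvaluationResult:
            return ScalarEvaluationResult.From_Artifacts(grid, artifacts, self)

        @property
        def abbreviation(self) -> str:
            return 'PWR'            # console and plot axis label

        @property
        def title(self) -> str:
            return 'Received Power'  # plot title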

abstract evaluate()[source]#

Evaluate the state of an investigated object.

Implements the process of extracting an arbitrary performance indicator, represented by the returned Artifact \(X_m\).

Returns: The evaluation, from which the artifact \(X_m\) may be generated.

Return type:

Evaluation

abstract generate_result(grid, artifacts)[source]#

Generates an evaluation result from the artifacts collected over the whole simulation grid.

Parameters:
  • grid (Sequence[GridDimension]) – The simulation grid.

  • artifacts (np.ndarray) – Numpy object array whose dimensions represent grid dimensions.

Return type:

EvaluationResult

Returns:

The evaluation result.

abstract property abbreviation: str#

Short string representation of this evaluator.

Used as a label for console output and plot axes annotations.

property confidence: float#

Confidence threshold required for premature simulation abortion.

The confidence threshold \(\delta \in [0, 1]\) is the upper bound to the confidence level

\[\mathrm{P}\left(\left\| \bar{X}_M - \mathrm{E}\left[ X \right] \right\| > \mathrm{TOL} \right)\]

at which the sample collection for a single GridSection may be prematurely aborted [1].

Raises:

ValueError – If confidence is lower than zero or greater than one.

property plot_scale: str#

Scale of the scalar evaluation plot.

Refer to the Matplotlib documentation for a list of accepted values.

Returns:

The scale identifier string.

Return type:

str

tick_format: ValueType#
abstract property title: str#

Long string representation of this evaluator.

Used as plot title.

property tolerance: float#

Tolerance level required for premature simulation abortion.

The tolerance \(\mathrm{TOL} \in \mathbb{R}_{++}\) is the upper bound to the interval

\[\left\| \bar{X}_M - \mathrm{E}\left[ X \right] \right\|\]

by which the performance indicator estimation \(\bar{X}_M\) may diverge from the actual expected value \(\mathrm{E}\left[ X \right]\).

Returns:

Non-negative tolerance \(\mathrm{TOL}\).

Return type:

float

Raises:

ValueError – If tolerance is negative.

class GridDimension(considered_objects, dimension, sample_points, title=None, plot_scale=None, tick_format=None)[source]#

Bases: object

Single axis within the simulation grid.

A grid dimension represents a single simulation parameter that is to be varied during simulation runtime to observe its effects on the evaluated performance indicators. The values the represented parameter is configured to are SamplePoints.

Parameters:
  • considered_objects (Union[Any, Tuple[Any, ...]]) – The considered objects of this grid section.

  • dimension (str) – Path to the attribute.

  • sample_points (List[Any]) – Sections the grid is sampled at.

  • title (str, optional) – Title of this dimension. If not specified, the attribute string is assumed.

  • plot_scale (str, optional) – Scale of the axis within plots.

  • tick_format (ValueType, optional) – Format of the tick labels. Linear by default.

Raises:

ValueError – If the selected dimension does not exist within the considered objects.
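As an illustration (the Model class and its snr attribute are assumptions for the sketch):

    from hermespy.core.monte_carlo import GridDimension

    class Model:
        def __init__(self) -> None:
            self.snr = 0.0  # settable attribute, hence a valid dimension

    model = Model()
    dimension = GridDimension(model, 'snr', [0.0, 10.0, 20.0],
                              title='Signal-to-Noise Ratio')

    dimension.configure_point(1)  # reconfigures model.snr to 10.0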

configure_point(point_index)[source]#

Configure a specific sample point.

Parameters:

point_index (int) – Index of the sample point to configure.

Raises:

ValueError – For invalid indexes.

Return type:

None

property considered_objects: Tuple[Any, ...]#

Considered objects of this grid section.

property dimension: str#

Dimension property name.

property first_impact: str | None#

Identifier of the first impacted simulation pipeline stage.

Returns:

Pipeline stage identifier. None, if the stage is unknown.

property last_impact: str | None#

Identifier of the last impacted simulation pipeline stage.

Returns:

Pipeline stage identifier. None, if the stage is unknown.

property num_sample_points: int#

Number of dimension sample points.

Returns:

Number of sample points.

Return type:

int

property plot_scale: str#

Scale of the scalar evaluation plot.

Refer to the Matplotlib documentation for a list of accepted values.

Returns:

The scale identifier string.

Return type:

str

property sample_points: List[SamplePoint]#

Points at which this grid dimension is sampled.

property tick_format: ValueType#

Axis tick format of the scalar evaluation plot.

Returns: The tick format.

property title: str#

Title of the dimension.

Returns:

The title string.

class GridSection(coordinates, num_evaluators)[source]#

Bases: object

Single parameter combination within the simulation grid.

Parameters:
  • coordinates (Tuple[int, ...]) – Section indices within the simulation grid.

  • num_evaluators (int) – Number of evaluators operating on this grid section.

add_samples(samples, evaluators)[source]#

Add a new sample to this grid section collection.

Parameters:
  • samples – Samples to be added to this grid section.

  • evaluators (Sequence[Evaluator]) – Evaluators that generated the sample artifacts.

Raises:

ValueError – If the number of artifacts in samples does not match the initialization.

Return type:

None

confidence_status(evaluators)[source]#

Check if each evaluator has reached its required confidence threshold.

Confidence indicates that the simulation for the parameter combination this grid section represents may be aborted, i.e. no more samples are required.

Parameters:

evaluators (Sequence[Evaluator]) – Evaluators giving feedback about their confidence status.

Return type:

bool

Returns: Confidence indicator.

property confidences: ndarray#

Confidence in the estimated evaluations.

Returns: Array indicating the confidence level for each evaluator.

property coordinates: Tuple[int, ...]#

Grid section coordinates within the simulation grid.

Returns:

Section coordinates.

Return type:

Tuple[int, …]

property num_samples: int#

Number of already generated samples within this section.

Returns:

Number of samples.

Return type:

int

property samples: List[MonteCarloSample]#

The collected evaluation samples within this grid section.

Returns: List of samples.

class MonteCarlo(investigated_object, num_samples, evaluators=None, min_num_samples=-1, num_actors=None, console=None, console_mode=ConsoleMode.INTERACTIVE, section_block_size=None, ray_address=None, cpus_per_actor=1, runtime_env=False, catch_exceptions=True, progress_log_interval=1.0)[source]#

Bases: Generic[MO]

Grid of parameters over which to iterate the simulation.

Parameters:
  • investigated_object (MO) – Object to be investigated during the simulation runtime.

  • num_samples (int) – Number of generated samples per grid element.

  • evaluators (Set[Evaluator]) – Evaluators used to process the investigated object sample state.

  • min_num_samples (int, optional) – Minimum number of generated samples per grid element.

  • num_actors (int, optional) – Number of dedicated actors spawned during simulation. By default, the number of actors will be the number of available CPU cores.

  • console (Console, optional) – Console the simulation writes to.

  • console_mode (ConsoleMode, optional) – The printing behaviour of the simulation during runtime.

  • section_block_size (int, optional) – Number of samples per section block. By default, the size of the simulation grid is selected.

  • ray_address (str, optional) – The address of the ray head node. If None is provided, the head node will be launched on this machine.

  • cpus_per_actor (int, optional) – Number of CPU cores reserved per actor. One by default.

  • runtime_env (bool, optional) – Create a virtual environment on each host. Disabled by default.

  • catch_exceptions (bool, optional) – Catch exceptions occurring during simulation runtime. Enabled by default.

  • progress_log_interval (float, optional) – Interval between result logs in seconds. 1 second by default.
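For instance, a run distributing its workload over an existing Ray cluster might be configured along these lines (the Model stand-in and all numeric values are placeholders):

    from hermespy.core.monte_carlo import MonteCarlo

    class Model:
        """Stand-in investigated object; any pickle-serializable class works."""

        def __init__(self) -> None:
            self.snr = 0.0

    simulation = MonteCarlo(
        investigated_object=Model(),
        num_samples=10000,        # upper sample limit per grid element
        min_num_samples=100,      # lower sample limit per grid element
        num_actors=8,             # spawn eight dedicated actors
        cpus_per_actor=2,         # reserve two CPU cores per actor
        ray_address='auto',       # attach to an already running Ray cluster
        catch_exceptions=False,   # let exceptions propagate while debugging
    )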

add_dimension(dimension)[source]#

Add a new dimension to the simulation grid.

Parameters:

dimension (GridDimension) – Dimension to be added.

Raises:

ValueError – If the dimension already exists within the grid.

Return type:

None

add_evaluator(evaluator)[source]#

Add new evaluator to the Monte Carlo simulation.

Parameters:

evaluator (Evaluator[MO]) – The evaluator to be added.

Return type:

None

new_dimension(dimension, sample_points, *args, **kwargs)[source]#

Add a dimension to the simulation grid.

Must be a property of the investigated object.

Parameters:
  • dimension (str) – String representation of dimension location relative to the investigated object.

  • sample_points (List[Any]) – List of points at which the dimension will be sampled into a grid. The type of the points must match the type of the configured attribute.

  • *args (Tuple[Any], optional) – References to the object the dimension belongs to. Resolved to the investigated object by default, but may be an attribute or sub-attribute of the investigated object.

  • **kwargs – Additional initialization arguments passed to GridDimension.

Return type:

GridDimension

Returns:

The newly created dimension object.
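For instance, a dot-separated path may address nested attributes (the snr and subsystem.gain paths are illustrative; simulation is a configured MonteCarlo instance as sketched above):

    # Dimension on a direct property of the investigated object
    simulation.new_dimension('snr', [0.0, 10.0, 20.0])

    # Dimension on a nested attribute, addressed by a dot-separated path
    gain_axis = simulation.new_dimension('subsystem.gain', [1.0, 2.0, 4.0],
                                         title='Amplifier Gain')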

remove_dimension(dimension)[source]#

Remove an existing dimension from the simulation grid.

Parameters:

dimension (GridDimension) – The dimension to be removed.

Raises:

ValueError – If the dimension does not exist.

Return type:

None

simulate(actor)[source]#

Launch the Monte Carlo simulation.

Parameters:

actor (Type[MonteCarloActor]) – The actor from which to generate the simulation samples.

Returns:

Generated samples.

Return type:

np.ndarray

catch_exceptions: bool#
property console: Console#

Console the Simulation writes to.

Returns:

Handle to the console.

Return type:

Console

property console_mode: ConsoleMode#

Console mode during simulation runtime.

Returns: The current console mode.

property cpus_per_actor: int#

Number of CPU cores reserved for each actor.

Returns:

Number of cores.

Raises:

ValueError – If the number of cores is smaller than one.

property dimensions: List[GridDimension]#

Simulation grid dimensions which make up the grid.

property evaluators: List[Evaluator]#

Evaluators used to process the investigated object sample state.

property investigated_object: Any#

The object to be investigated during the simulation runtime.

property max_num_samples: int#

Maximum number of samples over the whole simulation.

Returns:

Number of samples.

Return type:

int

property min_num_samples: int#

Minimum number of samples per simulation grid element.

Returns:

Number of samples.

Return type:

int

Raises:

ValueError – If number of samples is smaller than zero.

property num_actors: int#

Number of dedicated actors spawned during simulation runs.

Returns:

Number of actors.

Return type:

int

Raises:

ValueError – If the number of actors is smaller than zero.

property num_samples: int#

Number of samples per simulation grid element.

Returns:

Number of samples.

Return type:

int

Raises:

ValueError – If number of samples is smaller than one.

runtime_env: bool#
property section_block_size: int#

Number of generated samples per section block.

Returns:

Number of samples per block.

Return type:

int

Raises:

ValueError – If the block size is smaller than one.

class MonteCarloActor(argument_tuple, index, catch_exceptions=True)[source]#

Bases: Generic[MO]

Monte Carlo Simulation Actor.

Actors are essentially workers running in a private process executing simulation tasks. The result of each individual simulation task is a simulation sample.

Parameters:
  • argument_tuple (Tuple[TypeVar(MO), Sequence[GridDimension], Sequence[Evaluator]]) – Tuple of the object to be investigated during the simulation runtime, the dimensions over which the simulation will iterate, and the evaluators used to process the investigated object sample state.

  • index (int) – Global index of the actor.

  • catch_exceptions (bool, optional) – Catch exceptions during run. Enabled by default.

run(program, *, _ray_trace_ctx=None)[source]#

Run the simulation actor.

Parameters:

program (List[Tuple[int, ...]]) – A list of simulation grid section indices for which to collect samples.

Return type:

ActorRunResult

Returns:

A list of generated MonteCarloSample instances, containing the same number of entries as program.

abstract stage_executors()[source]#

List of simulation stage execution callbacks.

Simulation stages will be executed in the order specified here.

Return type:

List[Callable]

Returns:

List of function callbacks for simulation stage execution routines.

abstract static stage_identifiers()[source]#

List of simulation stage identifiers.

Simulation stages will be executed in the order specified here.

Return type:

List[str]

Returns:

List of function identifiers for simulation stage execution routines.
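A concrete actor pairs stage identifiers with matching execution callbacks, as in this rough sketch (the ModelActor name, stage names, and stage bodies are placeholders):

    from typing import Callable, List
    from hermespy.core.monte_carlo import MonteCarloActor

    class ModelActor(MonteCarloActor):
        """Illustrative actor executing two simulation stages in order."""

        @staticmethod
        def stage_identifiers() -> List[str]:
            return ['transmit', 'receive']

        def stage_executors(self) -> List[Callable]:
            # Callbacks are executed in the order of stage_identifiers()
            return [self.__transmit, self.__receive]

        def __transmit(self) -> None:
            ...  # run the transmit stage of the investigated object

        def __receive(self) -> None:
            ...  # run the receive stage of the investigated object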

catch_exceptions: bool#
class MonteCarloResult(grid, evaluators, sample_grid, performance_time)[source]#

Bases: object

Result of a Monte Carlo simulation.

Parameters:
  • grid (Sequence[GridDimension]) – Dimensions over which the simulation has swept.

  • evaluators (Sequence[Evaluator]) – Evaluators used to evaluate the simulation artifacts.

  • sample_grid (SampleGrid) – Grid containing evaluation artifacts collected over the grid iterations.

  • performance_time (float) – Time required to compute the simulation.

Raises:

ValueError – If the dimensions of samples do not match the supplied sweeping dimensions and evaluators.

plot()[source]#

Plot evaluation figures for all contained evaluator artifacts.

Return type:

List[FigureBase]

Returns:

List of handles to all created Matplotlib figures.

print(console=None)[source]#

Print a text representation of the simulation result.

Parameters:

console (Console, optional) – Rich console to print to. If not provided, a new one will be initialized.

Return type:

None

save_to_matlab(file)[source]#

Save simulation results to a MATLAB file.

Parameters:

file (str) – File location to which the results should be saved.

Return type:

None
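Typical post-processing of a simulation result might be sketched as:

    from hermespy.core.monte_carlo import MonteCarloResult

    def postprocess(result: MonteCarloResult) -> None:
        result.print()                         # text summary to the console
        result.plot()                          # one figure per evaluator
        result.save_to_matlab('results.mat')   # raw data export for MATLAB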

property evaluation_results: Sequence[EvaluationResult]#

Access individual evaluation results.

Returns: List of evaluation results.

property performance_time: float#

Simulation runtime.

Returns:

Simulation runtime in seconds.

class MonteCarloSample(grid_section, sample_index, artifacts)[source]#

Bases: object

Single sample of a Monte Carlo simulation.

Parameters:
  • grid_section (Tuple[int, ...]) – Grid section from which the sample was generated.

  • sample_index (int) – Index of the sample. In other words, this object represents the sample_index-th sample of the selected grid_section.

  • artifacts (Sequence[Artifact]) – Artifacts of the evaluation.

property artifact_scalars: ndarray#

Collect scalar artifact representations.

Returns:

Vector of scalar artifact representations.

Return type:

np.ndarray

property artifacts: Sequence[Artifact]#

Artifacts resulting from the sample’s evaluations.

Returns:

List of artifacts.

Return type:

List[Artifact]

property grid_section: Tuple[int, ...]#

Grid section from which this sample was generated.

Returns:

Tuple of grid section indices.

Return type:

Tuple[int, …]

property num_artifacts: int#

Number of contained artifact objects.

Returns:

Number of artifacts.

Return type:

int

property sample_index: int#

Index of the sample this object represents.

Returns:

Sample index.

Return type:

int

class RegisteredDimension(_property, first_impact=None, last_impact=None, title=None)[source]#

Bases: property

Register a class property getter as a PyMonte simulation dimension.

Registered properties may specify their simulation stage impacts and can therefore significantly reduce simulation runtime in cases where computationally demanding section re-calculations can be avoided.

Parameters:
  • first_impact (str, optional) – Name of the first simulation stage within the simulation pipeline which is impacted by manipulating this property. If not specified, the initial stage is assumed.

  • last_impact (str, optional) – Name of the last simulation stage within the simulation pipeline which is impacted by manipulating this property. If not specified, the final stage is assumed.

  • title (str, optional) – Displayed title of the dimension. If not specified, the dimension’s name will be assumed.
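In combination with the register() shorthand documented at the end of this section, a model property might be declared as a simulation dimension roughly like this (the stage names 'transmit' and 'receive' are placeholder pipeline identifiers):

    from hermespy.core.monte_carlo import register

    class Model:

        def __init__(self) -> None:
            self.__snr = 0.0

        @register(first_impact='transmit', last_impact='receive', title='SNR')
        @property
        def snr(self) -> float:
            return self.__snr

        @snr.setter
        def snr(self, value: float) -> None:
            self.__snr = value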

deleter(fdel)[source]#

Descriptor to obtain a copy of the property with a different deleter.

Return type:

RegisteredDimension

getter(fget)[source]#

Descriptor to obtain a copy of the property with a different getter.

Return type:

RegisteredDimension

classmethod is_registered(object)[source]#

Check if any object is a registered PyMonte simulation dimension.

Parameters:

object (Any) – The object in question.

Return type:

bool

Returns:

A boolean indicator.

setter(fset)[source]#

Descriptor to obtain a copy of the property with a different setter.

Return type:

RegisteredDimension

property first_impact: str | None#

First impact of the dimension within the simulation pipeline.

property last_impact: str | None#

Last impact of the dimension within the simulation pipeline.

property title: str | None#

Displayed title of the dimension.

class SampleGrid(grid_configuration, evaluators)[source]#

Bases: object

Grid of simulation samples.

Parameters:
  • grid_configuration (List[GridDimension]) – The simulation grid configuration.

  • evaluators (Sequence[Evaluator]) – The evaluators generating the artifacts.

class SamplePoint(value, title=None)[source]#

Bases: object

Sample point of a single grid dimension.

A single GridDimension holds a sequence of sample points accessible by the sample_points property. During simulation runtime, the simulation will dynamically reconfigure the scenario, selecting a single sample point out of each GridDimension per generated simulation sample.

Parameters:
  • value (Any) – Sample point value with which to configure the grid dimension.

  • title (str, optional) – String representation of this sample point. If not specified, the simulation will attempt to infer an adequate representation.

property title: str#

String representation of this sample point.

property value: Any#

Sample point value with which to configure the grid dimension.

class ScalarEvaluationResult(grid, scalar_results, evaluator, plot_surface=True)[source]#

Bases: EvaluationResult

Base class for scalar evaluation results.

Parameters:
  • grid (Sequence[GridDimension]) – Simulation grid.

  • scalar_results (np.ndarray) – Scalar results generated from collecting samples over the simulation grid.

  • evaluator (Evaluator) – The evaluator generating the results.

  • plot_surface (bool, optional) – Enable surface plotting for two-dimensional grids. Enabled by default.

classmethod From_Artifacts(grid, artifacts, evaluator, plot_surface=True)[source]#

Generate a scalar evaluation result from a set of artifacts.

Parameters:
  • grid (Sequence[GridDimension]) – The simulation grid.

  • artifacts (np.ndarray) – Numpy object array whose dimensions represent grid dimensions.

  • evaluator (Evaluator) – The evaluator generating the artifacts.

  • plot_surface (bool) – Whether to plot the result as a surface plot.

Return type:

TypeVar(SERT, bound= ScalarEvaluationResult)

Returns:

The scalar evaluation result.

to_array()[source]#

Convert the evaluation result raw data to an array representation.

Used to store the results in arbitrary binary file formats after simulation execution.

Return type:

ndarray

Returns:

The array result representation.

plot_surface: bool#
property title: str#

Title of the visualizable.

register(*args, **kwargs)[source]#

Shorthand to register a property as a MonteCarlo dimension.

Parameters:

_property (property) – The property to be registered.

Return type:

Callable[[Any], RegisteredDimension]