PyMonte

PyMonte is a stand-alone core module of HermesPy that enables efficient and flexible Monte Carlo simulations over arbitrary combinations of configuration parameters. Since it wraps the core of the Ray project, any object serializable by the standard pickle module can serve as the system model of a Monte Carlo style simulation campaign.

%%{init: {'theme': 'dark'}}%%
flowchart LR
    subgraph gridsection[Grid Section]
        parameter_a(Parameter)
        parameter_b(Parameter)
    end
    object((Investigated Object))
    evaluator_a{{Evaluator}}
    evaluator_b{{Evaluator}}
    evaluator_c{{Evaluator}}
    subgraph sample[Sample]
        artifact_a[(Artifact)]
        artifact_b[(Artifact)]
        artifact_c[(Artifact)]
    end
    parameter_a --> object
    parameter_b --> object
    object ---> evaluator_a ---> artifact_a
    object ---> evaluator_b ---> artifact_b
    object ---> evaluator_c ---> artifact_c

Monte Carlo simulations usually sweep over combinations of multiple parameter settings, configuring the underlying system model and generating simulation samples from independent realizations of the model state. PyMonte refers to a single parameter combination as a GridSection, with the set of all parameter combinations making up the simulation grid. Each settable property of the investigated object is treated as a potential simulation parameter within the grid, i.e. each settable property can be represented by an axis within the multidimensional simulation grid.
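To illustrate, the grid concept can be sketched in plain Python. The parameter axes and values below are hypothetical stand-ins, not part of the PyMonte API:

```python
from itertools import product

# Hypothetical sketch: each settable property becomes one axis of the
# simulation grid; every index combination identifies one grid section.
snr_points = [0.0, 10.0, 20.0]    # axis 1: assumed SNR values in dB
num_antennas_points = [1, 2, 4]   # axis 2: assumed antenna counts

# The full simulation grid is the Cartesian product of all axes
grid_sections = list(product(range(len(snr_points)), range(len(num_antennas_points))))

# Each tuple of indices selects one parameter combination (one GridSection)
section = grid_sections[4]  # (1, 1)
parameters = (snr_points[section[0]], num_antennas_points[section[1]])  # (10.0, 2)
```

Each index tuple plays the role of a GridSection coordinate; the simulation later iterates over all of them.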

Evaluator instances extract performance indicators from each investigated object realization, referred to as Artifacts. A set of artifacts drawn from the same investigated object realization makes up a single MonteCarloSample. During the execution of a PyMonte simulation, between \(M_\mathrm{min}\) and \(M_\mathrm{max}\) samples are generated from investigated object realizations for each grid section. The sample generation for a grid section may be aborted prematurely once all evaluators have reached a configured confidence threshold. Refer to Bayer et al.1 for a detailed description of the implemented algorithm.

%%{init: {'theme': 'dark'}}%%
flowchart LR
    controller{Simulation Controller}
    gridsection_a[Grid Section]
    gridsection_b[Grid Section]
    sample_a[Sample]
    sample_b[Sample]
    subgraph actor_a[Actor #1]
        object_a((Investigated Object))
    end
    subgraph actor_b[Actor #N]
        object_b((Investigated Object))
    end
    controller --> gridsection_a --> actor_a --> sample_a
    controller --> gridsection_b --> actor_b --> sample_b

The actual distribution of the simulation workload is visualized in the previous flowchart. Using Ray, PyMonte spawns a number of MonteCarloActor containers, the count depending on the detected resources (i.e. the number of available CPU cores). A central simulation controller schedules the workload by assigning GridSection indices as tasks to the actors, which return the resulting simulation Samples once their simulation iteration is completed.
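The controller/actor scheduling pattern can be sketched with the standard library alone. A thread pool stands in for Ray's actor processes here, and run_section is a hypothetical placeholder task, not the real MonteCarloActor interface:

```python
import os
from concurrent.futures import ThreadPoolExecutor


def run_section(grid_section):
    """Hypothetical actor task: realize the model for one grid section
    and return the resulting sample."""
    # Placeholder workload: the 'sample' simply echoes the section indices.
    return {"grid_section": grid_section, "artifacts": []}


grid_sections = [(0, 0), (0, 1), (1, 0), (1, 1)]
num_actors = os.cpu_count() or 1  # PyMonte sizes the pool by available cores

# The controller dispatches grid section tasks and collects samples
with ThreadPoolExecutor(max_workers=num_actors) as pool:
    samples = list(pool.map(run_section, grid_sections))
```

In PyMonte the workers are separate Ray actor processes rather than threads, which is why the investigated object must be pickle-serializable.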

class Artifact

Bases: object

Result of an investigated object evaluation.

Generated by Evaluator instances operating on investigated object states. In other words, Evaluator.evaluate() is expected to return an instance derived from this base class.

Artifacts may, in general, represent any sort of object. However, it is encouraged to provide a scalar floating-point representation for data visualization by implementing the to_scalar() method.
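A minimal sketch of such an artifact, using a local stand-in for the Artifact base class and a hypothetical bit-error indicator:

```python
from typing import Optional


class Artifact:
    """Local stand-in for hermespy.core.monte_carlo.Artifact."""

    def to_scalar(self) -> Optional[float]:
        raise NotImplementedError


class BitErrorArtifact(Artifact):
    """Hypothetical evaluation result: a bit error count."""

    def __init__(self, num_errors: int, num_bits: int) -> None:
        self.num_errors = num_errors
        self.num_bits = num_bits

    def to_scalar(self) -> Optional[float]:
        # Scalar form used for plotting and stopping-criterion checks
        return self.num_errors / self.num_bits


artifact = BitErrorArtifact(num_errors=5, num_bits=1000)
```

An artifact that cannot be meaningfully reduced to a scalar would instead return None from to_scalar().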

abstract to_scalar()

Scalar representation of this artifact’s content.

Used to evaluate premature stopping criteria for the underlying evaluation.

Returns

Scalar floating-point representation. None if a conversion to scalar is impossible.

Return type

Optional[float]

class ArtifactTemplate(artifact)

Bases: Generic[hermespy.core.monte_carlo.AT], hermespy.core.monte_carlo.Artifact

Scalar numerical result of an investigated object evaluation.

Implements the common case of an Artifact representing a scalar numerical value.

Parameters

artifact (AT) – Artifact value.
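The template pattern can be sketched as follows; this is a simplified local stand-in, not the HermesPy implementation:

```python
from copy import deepcopy
from typing import Generic, Optional, TypeVar

AT = TypeVar("AT")  # artifact type variable, assumed scalar-convertible


class ArtifactTemplate(Generic[AT]):
    """Sketch of the scalar-artifact convenience template."""

    def __init__(self, artifact: AT) -> None:
        self.__artifact = artifact

    @property
    def artifact(self) -> AT:
        # The documented property returns a copy of the wrapped value
        return deepcopy(self.__artifact)

    def to_scalar(self) -> Optional[float]:
        return float(self.__artifact)


snr_artifact = ArtifactTemplate(3.5)
```

Wrapping a plain number this way lets the simulation machinery treat all scalar results uniformly.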

property artifact: hermespy.core.monte_carlo.AT

Evaluation artifact.

Provides direct access to the represented artifact.

Returns

Copy of the artifact.

Return type

AT

to_scalar()

Scalar representation of this artifact’s content.

Used to evaluate premature stopping criteria for the underlying evaluation.

Returns

Scalar floating-point representation. None if a conversion to scalar is impossible.

Return type

Optional[float]

class Evaluator

Bases: Generic[hermespy.core.monte_carlo.MO]

Evaluation routine for investigated object states, extracting performance indicators of interest.

Evaluators represent the process of extracting arbitrary performance indicator samples \(X_m\) in the form of Artifact instances from investigated object states. Once a MonteCarloActor has set its investigated object to a new random state, it calls the evaluate() routines of all configured evaluators, collecting the resulting respective Artifact instances.

For a given set of Artifact instances, evaluators are expected to report a confidence_level() which may result in a premature abortion of the sample collection routine for a single GridSection. By default, the routine suggested by Bayer et al.1 is applied: Considering a tolerance \(\mathrm{TOL} \in \mathbb{R}_{++}\), the confidence in the mean performance indicator

\[\bar{X}_M = \frac{1}{M} \sum_{m = 1}^{M} X_m\]

is considered sufficient once the probability of the estimate deviating from the true expectation by more than the tolerance drops below a threshold \(\delta \in \mathbb{R}_{++}\), i.e. once

\[\mathrm{P}\left(\left\| \bar{X}_M - \mathrm{E}\left[ X \right] \right\| > \mathrm{TOL} \right) \leq \delta\]

holds. The number of actually collected artifacts per GridSection, \(M \in [M_{\mathrm{min}}, M_{\mathrm{max}}]\), lies between a minimum number of required samples \(M_{\mathrm{min}} \in \mathbb{R}_{+}\) and an upper limit of \(M_{\mathrm{max}} \in \mathbb{R}_{++}\).
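The stopping idea can be illustrated with a central-limit-theorem approximation. Note this is only a sketch of the underlying principle, not the estimator actually derived by Bayer et al.:

```python
from math import erf, sqrt


def confidence_level(scalars, tolerance):
    """CLT sketch of P(|mean - E[X]| <= TOL) for a list of scalar artifacts.

    Assumes the sample mean is approximately Normal(E[X], var / M).
    """
    m = len(scalars)
    mean = sum(scalars) / m
    var = sum((x - mean) ** 2 for x in scalars) / (m - 1)  # requires m >= 2
    if var == 0.0:
        return 1.0  # zero spread: the mean estimate is assumed exact
    # Standard normal CDF expressed via the error function
    z = tolerance * sqrt(m / var)
    return erf(z / sqrt(2))


def is_confident(scalars, tolerance, confidence_threshold):
    # Abort sampling once P(|mean - E[X]| > TOL) <= delta
    return 1.0 - confidence_level(scalars, tolerance) <= confidence_threshold
```

With a wide tolerance the criterion triggers quickly; with a tight tolerance many more samples are required before the section may be aborted.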

abstract evaluate(investigated_object)

Evaluate the state of an investigated object.

Implements the process of extracting an arbitrary performance indicator, represented by the returned Artifact \(X_m\).

Parameters

investigated_object (MO) – Investigated object, configured to an independent random state.

Returns

Artifact \(X_m\) resulting from the evaluation.

Return type

Artifact

abstract property abbreviation: str

Short string representation of this evaluator.

Used as a label for console output and plot axes annotations.

Returns

String representation

Return type

str

abstract property title: str

Long string representation of this evaluator.

Used as plot title.

Returns

String representation

Return type

str

property confidence: float

Confidence threshold required for premature simulation abortion.

The confidence threshold \(\delta \in [0, 1]\) is the upper bound to the confidence level

\[\mathrm{P}\left(\left\| \bar{X}_M - \mathrm{E}\left[ X \right] \right\| > \mathrm{TOL} \right)\]

at which the sample collection for a single GridSection may be prematurely aborted.

Returns

Confidence \(\delta\) between zero and one.

Return type

float

Raises

ValueError – If confidence is lower than zero or greater than one.

property tolerance: float

Tolerance level required for premature simulation abortion.

The tolerance \(\mathrm{TOL} \in \mathbb{R}_{++}\) is the upper bound to the interval

\[\left\| \bar{X}_M - \mathrm{E}\left[ X \right] \right\|\]

by which the performance indicator estimation \(\bar{X}_M\) may diverge from the actual expected value \(\mathrm{E}\left[ X \right]\).

Returns

Non-negative tolerance \(\mathrm{TOL}\).

Return type

float

Raises

ValueError – If tolerance is negative.

property plot_scale: str

Scale of the scalar evaluation plot.

Refer to the Matplotlib documentation for a list of accepted values.

Returns

The scale identifier string.

Return type

str

confidence_level(scalars)

Compute the confidence level in a given set of scalars.

Refer to Bayer et al.1 for a detailed derivation of the implemented equations.

Parameters

scalars (np.ndarray) – Numpy vector of scalar representations.

Raises

ValueError – If scalars is not a vector.

Return type

float

class MonteCarloSample(grid_section, sample_index, artifacts)

Bases: object

Single sample of a Monte Carlo simulation.

Parameters
  • grid_section (Tuple[int, ...]) – Grid section from which the sample was generated.

  • sample_index (int) – Index of the sample. In other words, this object represents the sample_index-th sample of the selected grid_section.

  • artifacts (List[Artifact]) – Artifacts of the sample's evaluations.
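A simplified local sketch of the sample container follows. The real class returns a numpy vector from artifact_scalars; a plain list is used here to stay dependency-free:

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Artifact:
    """Local stand-in: a scalar-valued evaluation artifact."""
    value: float

    def to_scalar(self) -> float:
        return self.value


@dataclass
class MonteCarloSample:
    grid_section: Tuple[int, ...]  # parameter combination that produced this sample
    sample_index: int              # the sample_index-th sample of that section
    artifacts: List[Artifact]      # one artifact per configured evaluator

    @property
    def num_artifacts(self) -> int:
        return len(self.artifacts)

    @property
    def artifact_scalars(self) -> List[float]:
        # The real property returns np.ndarray; a list suffices for the sketch
        return [artifact.to_scalar() for artifact in self.artifacts]


sample = MonteCarloSample((1, 0), 3, [Artifact(0.01), Artifact(2.5)])
```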

property grid_section: Tuple[int, ...]

Grid section from which this sample was generated.

Returns

Tuple of grid section indices.

Return type

Tuple[int, …]

property sample_index: int

Index of the sample this object represents.

Returns

Sample index.

Return type

int

property artifacts: List[hermespy.core.monte_carlo.Artifact]

Artifacts resulting from the sample’s evaluations.

Returns

List of artifacts.

Return type

List[Artifact]

property num_artifacts: int

Number of contained artifact objects.

Returns

Number of artifacts.

Return type

int

property artifact_scalars: numpy.ndarray

Collect scalar artifact representations.

Returns

Vector of scalar artifact representations.

Return type

np.ndarray

class GridSection(coordinates, evaluators)

Bases: object

Parameters
  • coordinates (Tuple[int, ...]) – Section indices within the simulation grid.

  • evaluators (List[Evaluator]) – Configured evaluators.

property coordinates: Tuple[int, ...]

Grid section coordinates within the simulation grid.

Returns

Section coordinates.

Return type

Tuple[int, …]

property num_samples: int

Number of already generated samples within this section.

Returns

Number of samples.

Return type

int

property num_evaluators: int

Number of configured evaluators.

Returns

Number of evaluators.

Return type

int

add_samples(samples)

Add new samples to this grid section's collection.

Parameters

samples (Union[MonteCarloSample, List[MonteCarloSample]]) – Samples to be added to this section.

Raises

ValueError – If the number of artifacts in samples does not match the initialization.

Return type

None

property confidences: numpy.ndarray

Confidence in the scalar evaluations.

Returns

Boolean array indicating confidence.

Return type

np.ndarray

property scalars: numpy.ndarray

Access the scalar evaluator representations in this grid section.

Returns

Matrix of scalar representations. First dimension indicates the evaluator index, second dimension the sample.

Return type

np.ndarray

class MonteCarloActor(argument_tuple, section_block_size=10)

Bases: Generic[hermespy.core.monte_carlo.MO]

Monte Carlo Simulation Actor.

Actors are essentially workers running in a private process executing simulation tasks. The result of each individual simulation task is a simulation sample.

Parameters
  • argument_tuple (Tuple[TypeVar(MO), List[GridDimension], List[Evaluator[TypeVar(MO)]]]) – Tuple containing the object to be investigated during the simulation runtime, the dimensions over which the simulation will iterate, and the evaluators used to process the investigated object's sample state.

  • section_block_size (int) – Number of samples generated per section block.

run(grid_section, *, _ray_trace_ctx=None)

Run the simulation actor.

Parameters

grid_section (Tuple[int, ...]) – Sample point index of each grid dimension.

Returns

The generated sample object.

Return type

MonteCarloSample

abstract sample()

Generate a sample of the investigated object.

Returns

The resulting sample.

Return type

MO

class MonteCarloResult(grid, evaluators, sections, performance_time)

Bases: Generic[hermespy.core.monte_carlo.MO]

Parameters
  • grid (List[GridDimension]) – Dimensions over which the simulation has swept.

  • evaluators (List[Evaluator]) – Evaluators used to evaluate the simulation artifacts.

  • sections (np.ndarray) – Evaluation results.

  • performance_time (float) – Time required to compute the simulation.

Raises

ValueError – If the dimensions of samples do not match the supplied sweeping dimensions and evaluators.

plot()

Plot evaluation figures for all contained evaluator artifacts.

Returns

List of handles to all created Matplotlib figures.

Return type

List[plt.Figure]

save_to_matlab(file)

Save simulation results to a MATLAB file.

Parameters

file (str) – File location to which the results should be saved.

Return type

None

class GridDimension(considered_object, dimension, sample_points, title=None, plot_scale=None)

Bases: object

Single axis within the simulation grid.

Parameters
  • considered_object (Any) – The considered object of this grid dimension.

  • dimension (str) – Path to the attribute.

  • sample_points (List[Any]) – Sections the grid is sampled at.

  • title (str, optional) – Title of this dimension. If not specified, the attribute string is assumed.

  • plot_scale (str, optional) – Scale of the axis within plots.

Raises

ValueError – If the selected dimension does not exist within the considered_object.
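The attribute-path mechanics can be sketched as follows; Modem, Device, and the dotted path are hypothetical examples, not the actual GridDimension implementation:

```python
from functools import reduce


class Modem:
    """Hypothetical nested component of an investigated object."""

    def __init__(self) -> None:
        self.carrier_frequency = 1e9


class Device:
    """Hypothetical investigated object."""

    def __init__(self) -> None:
        self.modem = Modem()


def configure_point(considered_object, dimension: str, value) -> None:
    """Resolve a dotted attribute path such as 'modem.carrier_frequency'
    and set the selected sample point on the considered object."""
    *path, attribute = dimension.split(".")
    target = reduce(getattr, path, considered_object)
    if not hasattr(target, attribute):
        raise ValueError(f"Dimension '{dimension}' does not exist")
    setattr(target, attribute, value)


device = Device()
configure_point(device, "modem.carrier_frequency", 2e9)
```

Resolving paths this way is what allows any settable (sub-)property of the investigated object to serve as a grid axis.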

property considered_object: Any

Considered object of this grid dimension.

Return type

Any

property sample_points: List[Any]

Points at which this grid dimension is sampled.

Returns

List of sample points.

Return type

List[Any]

property num_sample_points: int

Number of dimension sample points.

Returns

Number of sample points.

Return type

int

configure_point(point_index)

Configure a specific sample point.

Parameters

point_index (int) – Index of the sample point to configure.

Raises

ValueError – For invalid indexes.

Return type

None

property title: str

Title of the dimension.

Return type

str

Returns

The title string.

property plot_scale: str

Scale of the scalar evaluation plot.

Refer to the Matplotlib documentation for a list of accepted values.

Returns

The scale identifier string.

Return type

str

class MonteCarlo(investigated_object, num_samples, evaluators=None, min_num_samples=-1, num_actors=0, console=None, section_block_size=10)

Bases: Generic[hermespy.core.monte_carlo.MO]

Grid of parameters over which to iterate the simulation.

Parameters
  • investigated_object (MO) – Object to be investigated during the simulation runtime.

  • num_samples (int) – Number of generated samples per grid element.

  • evaluators (Set[MonteCarloEvaluators[MO]]) – Evaluators used to process the investigated object sample state.

  • min_num_samples (int, optional) – Minimum number of generated samples per grid element.

  • num_actors (int, optional) – Number of dedicated actors spawned during simulation. By default, the number of actors will be the number of available CPU cores.

  • console (Console, optional) – Console the simulation writes to.

  • section_block_size (int, optional) – Number of samples per section block. 10 by default, although this number is somewhat arbitrary.
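Putting the pieces together, a toy end-to-end sweep might look like this. System, evaluate, and the SNR axis are all hypothetical stand-ins for a real HermesPy scenario, and the sequential loop replaces the actual Ray-distributed execution:

```python
import random


class System:
    """Hypothetical investigated object with one settable property."""

    def __init__(self) -> None:
        self.snr = 0.0


def evaluate(system: System, rng: random.Random) -> float:
    # Hypothetical evaluator: a noisy indicator that improves with SNR
    return rng.gauss(1.0 / (1.0 + system.snr), 0.01)


rng = random.Random(42)
system = System()
snr_axis = [0.0, 10.0, 100.0]  # one grid dimension with three sample points
num_samples = 50               # samples generated per grid section

results = {}
for section_index, snr in enumerate(snr_axis):
    system.snr = snr  # configure the grid section on the investigated object
    scalars = [evaluate(system, rng) for _ in range(num_samples)]
    results[(section_index,)] = sum(scalars) / num_samples  # mean indicator
```

The real MonteCarlo class additionally applies the confidence-based early stopping per section and distributes the inner loop across Ray actors.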

simulate(actor)

Launch the Monte Carlo simulation.

Parameters

actor (Type[MonteCarloActor]) – The actor from which to generate the simulation samples.

Returns

Generated samples.

Return type

np.ndarray

property investigated_object: Any

The object to be investigated during the simulation runtime.

Return type

Any

new_dimension(dimension, sample_points, considered_object=None)

Add a dimension to the simulation grid.

Must be a property of the investigated object.

Parameters
  • dimension (str) – String representation of dimension location relative to the investigated object.

  • sample_points (List[Any]) – List of points at which the dimension will be sampled into a grid. The type of the points must match that of the swept attribute.

  • considered_object (Any, optional) – The object the dimension belongs to. Resolved to the investigated object by default, but may be an attribute or sub-attribute of the investigated object.

Return type

GridDimension

Returns

The newly created dimension object.

add_dimension(dimension)

Add a new dimension to the simulation grid.

Parameters

dimension (GridDimension) – Dimension to be added.

Raises

ValueError – If the dimension already exists within the grid.

Return type

None

add_evaluator(evaluator)

Add new evaluator to the Monte Carlo simulation.

Parameters

evaluator (Evaluator[MO]) – The evaluator to be added.

Return type

None

property num_samples: int

Number of samples per simulation grid element.

Returns

Number of samples

Return type

int

Raises

ValueError – If number of samples is smaller than one.

property min_num_samples: int

Minimum number of samples per simulation grid element.

Returns

Number of samples

Return type

int

Raises

ValueError – If number of samples is smaller than zero.

property max_num_samples: int

Maximum number of samples over the whole simulation.

Returns

Number of samples.

Return type

int

property num_actors: int

Number of dedicated actors spawned during simulation runs.

Returns

Number of actors.

Return type

int

Raises

ValueError – If the number of actors is smaller than zero.

property console: rich.console.Console

Console the Simulation writes to.

Returns

Handle to the console.

Return type

Console

property section_block_size: int

Number of generated samples per section block.

Returns

Number of samples per block.

Return type

int

Raises

ValueError – If the block size is smaller than one.

1

Christian Bayer, Håkon Hoel, Erik von Schwerin, and Raúl Tempone. On Non-Asymptotic Optimal Stopping Criteria in Monte Carlo Simulations. SIAM Journal on Scientific Computing, 36(2):A869–A885, 2014. doi:10.1137/130911433.