PyMonte¶
PyMonte is a stand-alone core module of HermesPy, enabling efficient and flexible Monte Carlo simulations over arbitrary combinations of configuration parameters. Since PyMonte wraps the core of the Ray project, any object serializable by the standard pickle module can become the system model of a Monte Carlo-style simulation campaign.
[Flowchart: Parameters of a Grid Section configure the Investigated Object; each Evaluator extracts an Artifact from the object realization, and the Artifacts together form a Sample.]
Monte Carlo simulations usually sweep over combinations of multiple parameter settings, configuring the underlying system model and generating simulation samples from independent realizations of the model state.
PyMonte refers to a single parameter combination as a GridSection, with the set of all parameter combinations making up the simulation grid.
Each settable property of the investigated object is treated as a potential simulation parameter within the grid, i.e. each settable property can be represented by an axis of the multidimensional simulation grid.
Evaluator instances extract performance indicators from each investigated object realization, referred to as Artifacts.
A set of artifacts drawn from the same investigated object realization makes up a single MonteCarloSample.
During the execution of a PyMonte simulation, between \(M_\mathrm{min}\) and \(M_\mathrm{max}\) samples are generated from investigated object realizations for each grid section.
The sample generation for a grid section may be aborted prematurely once all evaluators have reached a configured confidence threshold.
Refer to Bayer et al. [1] for a detailed description of the implemented algorithm.
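To make the stopping rule concrete, the following sketch approximates the abort criterion with a normal approximation of the estimator's distribution. It only illustrates the idea; HermesPy's actual implementation follows Bayer et al. [1], and the function and parameter names are illustrative rather than part of the API.

```python
import math
from typing import Sequence


def may_abort(samples: Sequence[float], tol: float, delta: float, min_samples: int) -> bool:
    """Illustrative stopping rule: abort once P(|mean - E[X]| > tol) <= delta."""

    m = len(samples)
    if m < max(min_samples, 2):
        return False

    mean = sum(samples) / m
    std = math.sqrt(sum((x - mean) ** 2 for x in samples) / (m - 1))
    if std == 0.0:
        return True  # no observed variance, estimate considered converged

    # Normal approximation: P(|mean - E[X]| > TOL) ~ erfc(TOL * sqrt(M) / (std * sqrt(2)))
    probability = math.erfc(tol * math.sqrt(m) / (std * math.sqrt(2.0)))
    return probability <= delta
```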
[Flowchart: The Simulation Controller assigns Grid Sections to Actors #1 through #N, each holding an Investigated Object instance and returning the generated Samples.]
The actual simulation workload distribution is visualized in the previous flowchart.
Using Ray, PyMonte spawns a number of MonteCarloActor containers, with the number of actors depending on the available resources (i.e. the number of CPU cores) detected.
A central simulation controller schedules the workload by assigning GridSection indices as tasks to the actors, which return the resulting simulation samples once the simulation iteration is completed.
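The sketch below shows how a simulation grid might be configured against the API documented in the remainder of this page. The Modem class and its snr attribute are hypothetical placeholders for an arbitrary picklable system model, and the import path is assumed.

```python
from hermespy.core.monte_carlo import MonteCarlo  # module path assumed


class Modem:
    """Hypothetical, picklable system model with a settable parameter."""

    def __init__(self) -> None:
        self.snr = 0.0


modem = Modem()

montecarlo = MonteCarlo(
    investigated_object=modem,
    num_samples=1000,      # maximum number of samples per grid section
    min_num_samples=100,   # lower bound before premature abortion is allowed
    # evaluators=[...],    # custom Evaluator instances, sketched further below
)

# Every settable property of the investigated object may become a grid axis
montecarlo.new_dimension('snr', [0.0, 10.0, 20.0])

# result = montecarlo.simulate(ModemActor)  # ModemActor: a MonteCarloActor subclass
# result.print()
# result.plot()
```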
- class ActorRunResult(samples=None, message=None)[source]¶
Bases:
object
Result object returned by generating a single sample from a simulation runner.
- Parameters:
samples (List[MonteCarloSample], optional) – Samples generated by the remote actor run.
message (str, optional) – Message returned from the remote actor run.
- samples: Optional[List[MonteCarloSample]]¶
- class Artifact[source]¶
Bases:
object
Result of an investigated object evaluation.
Generated by Evaluator instances operating on investigated object states. In other words, Evaluator.evaluate() is expected to return an instance derived from this base class.
Artifacts may, in general, represent any sort of object. However, it is encouraged to provide a scalar floating-point representation for data visualization by implementing the to_scalar() method.
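As a sketch of the encouraged pattern, a custom artifact can carry raw counts and expose a scalar through to_scalar(). The class and attribute names below are hypothetical, and for the common scalar case ArtifactTemplate can be used directly instead.

```python
from hermespy.core.monte_carlo import Artifact  # module path assumed


class BitErrorArtifact(Artifact):
    """Hypothetical artifact tracking bit errors of a single realization."""

    def __init__(self, num_errors: int, num_bits: int) -> None:
        self.num_errors = num_errors
        self.num_bits = num_bits

    def to_scalar(self) -> float:
        # Scalar floating-point representation used for visualization
        return self.num_errors / self.num_bits
```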
- class ArtifactTemplate(artifact)[source]¶
Scalar numerical result of an investigated object evaluation.
Implements the common case of an Artifact representing a scalar numerical value.
- Parameters:
artifact (AT) – Artifact value.
- class Evaluation[source]¶
Bases:
Generic[VT], Visualizable[VT]
Evaluation of a single simulation sample.
Evaluations are generated by Evaluators during Evaluator.evaluate().
- class EvaluationResult(grid, evaluator=None)[source]¶
Bases:
Visualizable[PlotVisualization], ABC
Result of an evaluation routine iterating over a parameter grid.
Evaluation results are generated by Evaluator instances as a final step within the evaluation routine.
- Parameters:
grid (Sequence[GridDimension]) – Parameter grid over which the simulation generating this result iterated.
evaluator (Evaluator, optional) – Evaluator that generated this result. If not specified, the result is considered to be generated by an unknown evaluator.
- print(console=None)[source]¶
Print a readable version of this evaluation result.
- Parameters:
console (Console | None) – Rich console to print in. If not provided, a new one will be generated.
- Return type:
- abstract to_array()[source]¶
Convert the evaluation result raw data to an array representation.
Used to store the results in arbitrary binary file formats after simulation execution.
- Return type:
- Returns:
The array result representation.
- property grid: Sequence[GridDimension]¶
Parameter grid over which the simulation iterated.
- class EvaluationTemplate(evaluation)[source]¶
Bases:
Generic[ET, VT], Evaluation[VT], ABC
Template class for simple evaluations containing a single object.
- Parameters:
evaluation (ET) – The represented evaluation.
- property evaluation: ET¶
The represented evaluation.
- class Evaluator[source]¶
Bases:
ABC
Evaluation routine for investigated object states, extracting performance indicators of interest.
Evaluators represent the process of extracting arbitrary performance indicator samples \(X_m\) in the form of Artifact instances from investigated object states. Once a MonteCarloActor has set its investigated object to a new random state, it calls the evaluate() routines of all configured evaluators, collecting the resulting Artifact instances.
For a given set of Artifact instances, evaluators are expected to report a confidence_level() which may result in a premature abortion of the sample collection routine for a single GridSection. By default, the routine suggested by Bayer et al. [1] is applied: Considering a tolerance \(\mathrm{TOL} \in \mathbb{R}_{++}\), the confidence in the mean performance indicator
\[\bar{X}_M = \frac{1}{M} \sum_{m = 1}^{M} X_m\]
is considered sufficient once the threshold \(\delta \in \mathbb{R}_{++}\), defined by
\[\mathrm{P}\left(\left\| \bar{X}_M - \mathrm{E}\left[ X \right] \right\| > \mathrm{TOL} \right) \leq \delta ,\]
has been crossed. The number of actually collected artifacts per GridSection, \(M \in [M_{\mathrm{min}}, M_{\mathrm{max}}]\), lies between a minimum number of required samples \(M_{\mathrm{min}} \in \mathbb{R}_{+}\) and an upper limit of \(M_{\mathrm{max}} \in \mathbb{R}_{++}\).
- abstract evaluate()[source]¶
Evaluate the state of an investigated object.
Implements the process of extracting an arbitrary performance indicator, represented by the returned Artifact \(X_m\).
Returns: Artifact \(X_m\) resulting from the evaluation.
- Return type:
- abstract generate_result(grid, artifacts)[source]¶
Generates an evaluation result from the artifacts collected over the whole simulation grid.
- Parameters:
grid (Sequence[GridDimension]) – The simulation grid.
artifacts (numpy.ndarray) – Numpy object array whose dimensions represent grid dimensions.
- Return type:
- Returns:
The evaluation result.
- abstract property abbreviation: str¶
Short string representation of this evaluator.
Used as a label for console output and plot axes annotations.
- property confidence: float¶
Confidence threshold required for premature simulation abortion.
The confidence threshold \(\delta \in [0, 1]\) is the upper bound to the confidence level
\[\mathrm{P}\left(\left\| \bar{X}_M - \mathrm{E}\left[ X \right] \right\| > \mathrm{TOL} \right)\]
at which the sample collection for a single GridSection may be prematurely aborted [1].
- Raises:
ValueError – If confidence is lower than zero or greater than one.
- property plot_scale: str¶
Scale of the scalar evaluation plot.
Refer to the Matplotlib documentation for a list of accepted values.
- Returns:
The scale identifier string.
- Return type:
- property tolerance: float¶
Tolerance level required for premature simulation abortion.
The tolerance \(\mathrm{TOL} \in \mathbb{R}_{++}\) is the upper bound to the interval
\[\left\| \bar{X}_M - \mathrm{E}\left[ X \right] \right\|\]by which the performance indicator estimation \(\bar{X}_M\) may diverge from the actual expected value \(\mathrm{E}\left[ X \right]\).
- Returns:
Non-negative tolerance \(\mathrm{TOL}\).
- Return type:
- Raises:
ValueError – If tolerance is negative.
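The following is a hedged sketch of a custom evaluator built against the abstract interface documented above. The investigated object's transmitted_signal attribute is an assumption made purely for illustration, and the final result is assembled through the ScalarEvaluationResult.From_Artifacts() factory documented further below.

```python
import numpy as np

from hermespy.core.monte_carlo import (  # module path assumed
    Artifact, ArtifactTemplate, EvaluationResult, Evaluator, ScalarEvaluationResult,
)


class PowerEvaluator(Evaluator):
    """Illustrative evaluator estimating the mean power of a hypothetical
    transmitted_signal attribute of the investigated object."""

    def __init__(self, system) -> None:
        self.__system = system
        super().__init__()

    def evaluate(self) -> Artifact:
        # Extract a single performance indicator sample X_m
        power = float(np.mean(np.abs(self.__system.transmitted_signal) ** 2))
        return ArtifactTemplate(power)

    @property
    def abbreviation(self) -> str:
        return "Pwr"

    def generate_result(self, grid, artifacts) -> EvaluationResult:
        # Collapse the grid of collected artifacts into a scalar evaluation result
        return ScalarEvaluationResult.From_Artifacts(grid, artifacts, self)
```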
- class GridDimension(considered_objects, dimension, sample_points, title=None, plot_scale=None, tick_format=None)[source]¶
Bases:
object
Single axis within the simulation grid.
A grid dimension represents a single simulation parameter that is to be varied during simulation runtime to observe its effects on the evaluated performance indicators. The values the represented parameter is configured to are
SamplePoints.
- Parameters:
considered_objects (Union[Any, Tuple[Any, ...]]) – The considered objects of this grid section.
dimension (str) – Path to the attribute.
sample_points (List[Any]) – Sections the grid is sampled at.
title (str, optional) – Title of this dimension. If not specified, the attribute string is assumed.
plot_scale (str, optional) – Scale of the axis within plots.
tick_format (ValueType, optional) – Format of the tick labels. Linear by default.
- Raises:
ValueError – If the selected dimension does not exist within the considered_object.
- configure_point(point_index)[source]¶
Configure a specific sample point.
- Parameters:
point_index (int) – Index of the sample point to configure.
- Raises:
ValueError – For invalid indexes.
- Return type:
- property first_impact: str | None¶
Index of the first impacted simulation pipeline stage.
- Returns:
Pipeline stage index. None, if the stage is unknown.
- property last_impact: str | None¶
Index of the last impacted simulation pipeline stage.
- Returns:
Pipeline stage index. None, if the stage is unknown.
- property num_sample_points: int¶
Number of dimension sample points.
- Returns:
Number of sample points.
- Return type:
- property plot_scale: str¶
Scale of the scalar evaluation plot.
Refer to the Matplotlib documentation for a list of accepted values.
- Returns:
The scale identifier string.
- Return type:
- property sample_points: List[SamplePoint]¶
Points at which this grid dimension is sampled.
- class GridSection(coordinates, num_evaluators)[source]¶
Bases:
object
- Parameters:
coordinates (Tuple[int, ...]) – Section indices within the simulation grid.
- add_samples(samples, evaluators)[source]¶
Add a new sample to this grid section collection.
- Parameters:
samples (Union[MonteCarloSample, Sequence[MonteCarloSample]]) – Samples to be added to this section.
evaluators (Sequence[Evaluator]) – References to the evaluators generating the sample artifacts.
- Raises:
ValueError – If the number of artifacts in sample does not match the initialization.
- Return type:
- confidence_status(evaluators)[source]¶
Check if each evaluator has reached its required confidence threshold.
Confidence indicates that the simulation for the parameter combination this grid section represents may be aborted, i.e. no more samples are required.
- Parameters:
evaluators (Sequence[Evaluator]) – Evaluators giving feedback about their confidence status.
- Return type:
Returns: Confidence indicator.
- property confidences: ndarray¶
Confidence in the estimated evaluations.
Returns: Array indicating probabilities for each evaluator
- property coordinates: Tuple[int, ...]¶
Grid section coordinates within the simulation grid.
- Returns:
Section coordinates.
- Return type:
Tuple[int, …]
- property num_samples: int¶
Number of already generated samples within this section.
- Returns:
Number of samples.
- Return type:
- property samples: List[MonteCarloSample]¶
The collected evaluation samples within this grid section.
Returns: List of samples.
- class MonteCarlo(investigated_object, num_samples, evaluators=None, min_num_samples=-1, num_actors=None, console=None, console_mode=ConsoleMode.INTERACTIVE, section_block_size=None, ray_address=None, cpus_per_actor=1, runtime_env=False, catch_exceptions=True, progress_log_interval=1.0)[source]¶
Bases:
Generic[MO]
Grid of parameters over which to iterate the simulation.
- Parameters:
investigated_object (MO) – Object to be investigated during the simulation runtime.
num_samples (int) – Number of generated samples per grid element.
evaluators (Set[MonteCarloEvaluators[MO]]) – Evaluators used to process the investigated object sample state.
min_num_samples (int, optional) – Minimum number of generated samples per grid element.
num_actors (int, optional) – Number of dedicated actors spawned during simulation. By default, the number of actors will be the number of available CPU cores.
console (Console, optional) – Console the simulation writes to.
console_mode (ConsoleMode, optional) – The printing behaviour of the simulation during runtime.
section_block_size (int, optional) – Number of samples per section block. By default, the size of the simulation grid is selected.
ray_address (str, optional) – The address of the ray head node. If None is provided, the head node will be launched on this machine.
cpus_per_actor (int, optional) – Number of CPU cores reserved per actor. One by default.
runtime_env (bool, optional) – Create a virtual environment on each host. Disabled by default.
catch_exceptions (bool, optional) – Catch exceptions occurring during simulation runtime. Enabled by default.
progress_log_interval (float, optional) – Interval between result logs in seconds. 1 second by default.
- add_dimension(dimension)[source]¶
Add a new dimension to the simulation grid.
- Parameters:
dimension (GridDimension) – Dimension to be added.
- Raises:
ValueError – If the dimension already exists within the grid.
- Return type:
- new_dimension(dimension, sample_points, *args, **kwargs)[source]¶
Add a dimension to the simulation grid.
Must be a property of the investigated object.
- Parameters:
dimension (str) – String representation of dimension location relative to the investigated object.
sample_points (List[Any]) – List of points at which the dimension will be sampled into a grid. The type of the points must be identical to the grid argument type.
*args (Tuple[Any], optional) – References to the object the dimension belongs to. Resolved to the investigated object by default, but may be an attribute or sub-attribute of the investigated object.
**kwargs – Additional initialization arguments passed to GridDimension.
- Return type:
- Returns:
The newly created dimension object.
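The sketch below shows two plausible readings of this signature: a direct property of the investigated object, and a dotted path addressing a sub-attribute. Both property names are hypothetical, and the dotted-path form is an assumption rather than confirmed usage.

```python
# Direct settable property of the investigated object
montecarlo.new_dimension('snr', [0.0, 10.0, 20.0])

# Hypothetical sub-attribute addressed relative to the investigated object,
# with an additional keyword forwarded to GridDimension
montecarlo.new_dimension('channel.gain', [0.5, 1.0], title='Channel Gain')
```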
- remove_dimension(dimension)[source]¶
Remove an existing dimension from the simulation grid.
- Parameters:
dimension (GridDimension) – The dimension to be removed.
- Raises:
ValueError – If the dimension does not exist.
- Return type:
- simulate(actor, additional_dimensions=None, stage_arguments=None)[source]¶
Launch the Monte Carlo simulation.
- Parameters:
actor (Type[MonteCarloActor]) – The actor from which to generate the simulation samples.
additional_dimensions (Set[GridDimension], optional) – Additional dimensions to be added to the simulation grid.
stage_arguments (Mapping[str, Any], optional) – Arguments to be passed to the simulation stages. If the argument is a sequence, the respective stage will iterate over the sequence.
- Return type:
Returns: A MonteCarloResult dataclass containing the simulation results.
- property console: Console¶
Console the Simulation writes to.
- Returns:
Handle to the console.
- Return type:
Console
- property console_mode: ConsoleMode¶
Console mode during simulation runtime.
Returns: The current console mode.
- property cpus_per_actor: int¶
Number of CPU cores reserved for each actor.
- Returns:
Number of cores.
- Raises:
ValueError – If the number of cores is smaller than one.
- property dimensions: List[GridDimension]¶
Simulation grid dimensions which make up the grid.
- property evaluators: List[Evaluator]¶
Evaluators used to process the investigated object sample state.
- property max_num_samples: int¶
Maximum number of samples over the whole simulation.
- Returns:
Number of samples.
- Return type:
- property min_num_samples: int¶
Minimum number of samples per simulation grid element.
- Returns:
Number of samples
- Return type:
- Raises:
ValueError – If number of samples is smaller than zero.
- property num_actors: int¶
Number of dedicated actors spawned during simulation runs.
- Returns:
Number of actors.
- Return type:
- Raises:
ValueError – If the number of actors is smaller than zero.
- property num_samples: int¶
Number of samples per simulation grid element.
- Returns:
Number of samples
- Return type:
- Raises:
ValueError – If number of samples is smaller than one.
- property section_block_size: int¶
Number of generated samples per section block.
- Returns:
Number of samples per block.
- Return type:
- Raises:
ValueError – If the block size is smaller than one.
- class MonteCarloActor(argument_tuple, index, stage_arguments=None, catch_exceptions=True)[source]¶
Bases:
Generic[MO]
Monte Carlo Simulation Actor.
Actors are essentially workers running in a private process executing simulation tasks. The result of each individual simulation task is a simulation sample.
- Parameters:
argument_tuple (Tuple[MO, Sequence[GridDimension], Sequence[Evaluator]]) – Object to be investigated during the simulation runtime, dimensions over which the simulation will iterate, and evaluators used to process the investigated object sample state.
index (int) – Global index of the actor.
stage_arguments (Mapping[str, Sequence[Tuple]], optional) – Arguments for the simulation stages.
catch_exceptions (bool, optional) – Catch exceptions during run. Enabled by default.
- run(program, *, _ray_trace_ctx=None)[source]¶
Run the simulation actor.
- Parameters:
program (List[Tuple[int, ...]]) – A list of simulation grid section indices for which to collect samples.
- Return type:
- Returns:
A list of generated MonteCarloSamples, containing the same number of entries as program.
- abstract stage_executors()[source]¶
List of simulation stage execution callbacks.
Simulation stages will be executed in the order specified here.
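A hypothetical actor subclass might look as follows, reusing the hypothetical Modem model from the configuration sketch above. The _investigated_object attribute name and the transmit/receive calls are assumptions, and the real base class may require further overrides; this only sketches the stage-callback pattern.

```python
from hermespy.core.monte_carlo import MonteCarloActor  # module path assumed


class ModemActor(MonteCarloActor[Modem]):
    """Hypothetical actor executing two simulation stages per generated sample."""

    def stage_executors(self):
        # Stages are executed in the listed order for every generated sample
        return [self.__transmit, self.__receive]

    def __transmit(self) -> None:
        self._investigated_object.transmit()  # assumed attribute and method

    def __receive(self) -> None:
        self._investigated_object.receive()   # assumed attribute and method
```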
- class MonteCarloResult(grid, evaluators, sample_grid, performance_time)[source]¶
Bases:
object
Result of a Monte Carlo simulation.
- Parameters:
grid (Sequence[GridDimension]) – Dimensions over which the simulation has swept.
evaluators (Sequence[Evaluator]) – Evaluators used to evaluate the simulation artifacts.
sample_grid (SampleGrid) – Grid containing evaluation artifacts collected over the grid iterations.
performance_time (float) – Time required to compute the simulation.
- Raises:
ValueError – If the dimensions of samples do not match the supplied sweeping dimensions and evaluators.
- plot()[source]¶
Plot evaluation figures for all contained evaluator artifacts.
Returns: Container of all generated plots.
- Return type:
- print(console=None)[source]¶
Print a text representation of the simulation result.
- Parameters:
console (Console, optional) – Rich console to print to. If not provided, a new one will be initialized.
- Return type:
- property evaluation_results: Sequence[EvaluationResult]¶
Access individual evaluation results.
Returns: List of evaluation results.
- class MonteCarloSample(grid_section, sample_index, artifacts)[source]¶
Bases:
object
Single sample of a Monte Carlo simulation.
- Parameters:
- property artifact_scalars: ndarray¶
Collect scalar artifact representations.
- Returns:
Vector of scalar artifact representations.
- Return type:
np.ndarray
- property artifacts: Sequence[Artifact]¶
Artifacts resulting from the sample’s evaluations.
- Returns:
List of artifacts.
- Return type:
List[Artifact]
- property grid_section: Tuple[int, ...]¶
Grid section from which this sample was generated.
- Returns:
Tuple of grid section indices.
- Return type:
Tuple[int, …]
- class RegisteredDimension(_property, first_impact=None, last_impact=None, title=None)[source]¶
Bases:
property
Register a class property getter as a PyMonte simulation dimension.
Registered properties may specify their simulation stage impacts and therefore significantly increase simulation runtime in cases where computationally demanding section re-calculations can be reduced.
- Parameters:
first_impact (str, optional) – Name of the first simulation stage within the simulation pipeline which is impacted by manipulating this property. If not specified, the initial stage is assumed.
last_impact (str, optional) – Name of the last simulation stage within the simulation pipeline which is impacted by manipulating this property. If not specified, the final stage is assumed.
title (str, optional) – Displayed title of the dimension. If not specified, the dimension’s name will be assumed.
- deleter(fdel)[source]¶
Descriptor to obtain a copy of the property with a different deleter.
- Return type:
- getter(fget)[source]¶
Descriptor to obtain a copy of the property with a different getter.
- Return type:
- classmethod is_registered(object)[source]¶
Check if any object is a registered PyMonte simulation dimension.
- Parameters:
object (Any) – The object in question.
- Return type:
- Returns:
A boolean indicator.
- class SampleGrid(grid_configuration, evaluators)[source]¶
Bases:
object
Grid of simulation samples.
- Parameters:
grid_configuration (List[GridDimension]) – The simulation grid configuration.
evaluators (Sequence[Evaluator]) – The evaluators generating the artifacts.
- class SamplePoint(value, title=None)[source]¶
Bases:
object
Sample point of a single grid dimension.
A single GridDimension holds a sequence of sample points accessible by the sample_points property. During simulation runtime, the simulation will dynamically reconfigure the scenario, selecting a single sample point out of each GridDimension per generated simulation sample.
- Parameters:
value (Any) – Sample point value with which to configure the grid dimension.
title (str, optional) – String representation of this sample point. If not specified, the simulation will attempt to infer an adequate representation.
- class ScalarDimension[source]¶
Bases:
ABC
Base class for objects that can be configured by scalar values.
When a property of type ScalarDimension is defined as a simulation parameter GridDimension, the simulation will automatically configure the object with the scalar value of the sample point during simulation runtime.
The configuration operation is represented by the lshift operator <<.
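A minimal sketch follows, assuming the << hook is provided by overriding __lshift__; the exact abstract interface of ScalarDimension is not spelled out in this reference.

```python
from hermespy.core.monte_carlo import ScalarDimension  # module path assumed


class TransmitGain(ScalarDimension):
    """Hypothetical scalar dimension configured via the << operator."""

    def __init__(self) -> None:
        self.__gain = 1.0

    def __lshift__(self, scalar: float) -> None:
        # Called during simulation runtime to apply the current sample point value
        self.__gain = scalar

    @property
    def gain(self) -> float:
        return self.__gain
```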
- class ScalarEvaluationResult(grid, scalar_results, evaluator, plot_surface=True)[source]¶
Bases:
EvaluationResult
Base class for scalar evaluation results.
- Parameters:
grid (Sequence[GridDimension]) – Simulation grid.
scalar_results (numpy.ndarray) – Scalar results generated from collecting samples over the simulation grid.
evaluator (Evaluator) – The evaluator generating the results.
plot_surface (bool, optional) – Enable surface plotting for two-dimensional grids. Enabled by default.
- classmethod From_Artifacts(grid, artifacts, evaluator, plot_surface=True)[source]¶
Generate a scalar evaluation result from a set of artifacts.
- Parameters:
grid (Sequence[GridDimension]) – The simulation grid.
artifacts (numpy.ndarray) – Numpy object array whose dimensions represent grid dimensions.
evaluator (Evaluator) – The evaluator generating the artifacts.
plot_surface (bool) – Whether to plot the result as a surface plot.
- Return type:
TypeVar(SERT, bound=ScalarEvaluationResult)
- Returns:
The scalar evaluation result.
- create_figure(**kwargs)[source]¶
Create a new figure for plotting.
Returns: Newly generated figure and axes to plot into.
- register(*args, **kwargs)[source]¶
Shorthand to register a property as a MonteCarlo dimension.
- Parameters:
_property (property) – The property to be registered.
- Return type:
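To close, a hedged sketch of registering a property with stage-impact hints. Whether register wraps an existing property object directly (as assumed here) or acts as a decorator factory is not fully specified by this reference, and the stage names are purely illustrative.

```python
from hermespy.core.monte_carlo import register  # module path assumed


class Modem:
    """Hypothetical system model exposing a registered simulation dimension."""

    def __init__(self) -> None:
        self.__snr = 0.0

    def _get_snr(self) -> float:
        return self.__snr

    def _set_snr(self, value: float) -> None:
        self.__snr = value

    # Only stages between the assumed 'transmit' and 'receive' stages need
    # re-running when this dimension is reconfigured.
    snr = register(property(_get_snr, _set_snr),
                   first_impact='transmit', last_impact='receive', title='SNR')
```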