Evaluator#
- class Evaluator[source]#
Bases: ABC
Evaluation routine for investigated object states, extracting performance indicators of interest.
Evaluators represent the process of extracting arbitrary performance indicator samples \(X_m\) in the form of Artifact instances from investigated object states. Once a MonteCarloActor has set its investigated object to a new random state, it calls the evaluate() routines of all configured evaluators, collecting the resulting Artifact instances.

For a given set of Artifact instances, evaluators are expected to report a confidence_level(), which may result in a premature abortion of the sample collection routine for a single GridSection. By default, the routine suggested by Bayer et al. [1] is applied: Considering a tolerance \(\mathrm{TOL} \in \mathbb{R}_{++}\), the confidence in the mean performance indicator

\[\bar{X}_M = \frac{1}{M} \sum_{m = 1}^{M} X_m\]

is considered sufficient once the deviation probability drops below a threshold \(\delta \in [0, 1]\):

\[\mathrm{P}\left(\left\| \bar{X}_M - \mathrm{E}\left[ X \right] \right\| > \mathrm{TOL} \right) \leq \delta\]

The number of actually collected artifacts per GridSection, \(M \in [M_{\mathrm{min}}, M_{\mathrm{max}}]\), lies between a minimum number of required samples \(M_{\mathrm{min}} \in \mathbb{R}_{+}\) and an upper limit of \(M_{\mathrm{max}} \in \mathbb{R}_{++}\).

- abstract evaluate()[source]#
Evaluate the state of an investigated object.
Implements the process of extracting an arbitrary performance indicator, represented by the returned Artifact \(X_m\).
- Returns:
Artifact \(X_m\) resulting from the evaluation.
- Return type:
Artifact
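To make the extension point concrete, here is a minimal, self-contained sketch of a hypothetical evaluator. The names PowerEvaluator and ScalarArtifact, the investigated-object accessor, and the constructor signatures are illustrative assumptions, not part of the documented API; a real implementation would subclass Evaluator and return a proper Artifact.

```python
# Illustrative sketch only: names and signatures below are assumptions,
# not the documented API. A real evaluator subclasses Evaluator and
# returns an Artifact from evaluate().
import numpy as np


class ScalarArtifact:
    """Hypothetical Artifact stand-in wrapping a single sample X_m."""

    def __init__(self, sample: float) -> None:
        self.sample = sample


class PowerEvaluator:
    """Hypothetical evaluator extracting mean signal power as X_m."""

    def __init__(self, investigated_object) -> None:
        self._investigated_object = investigated_object

    def evaluate(self) -> ScalarArtifact:
        # Reduce the current (randomly re-initialized) object state to a
        # single scalar performance-indicator sample X_m.
        samples = np.asarray(self._investigated_object.samples)
        return ScalarArtifact(float(np.mean(np.abs(samples) ** 2)))

    @property
    def abbreviation(self) -> str:
        # Short label used in console output and plot annotations.
        return "PWR"


class _Stub:
    """Toy investigated object for demonstration."""

    samples = [1.0, -2.0, 0.5]


artifact = PowerEvaluator(_Stub()).evaluate()
print(artifact.sample)  # 1.75, the mean power of the stub samples
```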
- abstract generate_result(grid, artifacts)[source]#
Generates an evaluation result from the artifacts collected over the whole simulation grid.
- Parameters:
grid (Sequence[GridDimension]) – The Simulation grid.
artifacts (np.ndarray) – Numpy object array whose dimensions represent grid dimensions.
- Return type:
- Returns:
The evaluation result.
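Assuming each cell of the object array holds the sequence of artifacts collected for the corresponding GridSection (an assumption about the layout, not stated by the documentation), a typical grid-wide reduction might look like this sketch:

```python
# Sketch of a grid-wide reduction, assuming each cell of the object array
# holds the artifacts collected for one GridSection.
from types import SimpleNamespace

import numpy as np


def reduce_to_means(artifacts: np.ndarray) -> np.ndarray:
    """Collapse each grid cell's artifacts to the mean indicator."""
    means = np.empty(artifacts.shape, dtype=float)
    for index in np.ndindex(artifacts.shape):
        cell = artifacts[index]  # sequence of artifacts for one section
        means[index] = np.mean([a.sample for a in cell])
    return means


# Demonstration with a one-dimensional grid of two sections:
grid_artifacts = np.empty((2,), dtype=object)
grid_artifacts[0] = [SimpleNamespace(sample=1.0), SimpleNamespace(sample=3.0)]
grid_artifacts[1] = [SimpleNamespace(sample=2.0)]
print(reduce_to_means(grid_artifacts))  # [2. 2.]
```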
- abstract property abbreviation: str#
Short string representation of this evaluator.
Used as a label for console output and plot axes annotations.
- property confidence: float#
Confidence threshold required for premature simulation abortion.
The confidence threshold \(\delta \in [0, 1]\) is the upper bound on the confidence level

\[\mathrm{P}\left(\left\| \bar{X}_M - \mathrm{E}\left[ X \right] \right\| > \mathrm{TOL} \right)\]

at which the sample collection for a single GridSection may be prematurely aborted [1].
- Raises:
ValueError – If confidence is lower than zero or greater than one.
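For intuition, the abortion criterion can be approximated with a plain central-limit-theorem bound. Note that Bayer et al. [1] derive a more careful sequential estimate, so the normal approximation below is only a didactic stand-in, and the function and parameter names are illustrative:

```python
# Illustration of the premature-abortion criterion using a normal (CLT)
# approximation of P(|mean(X) - E[X]| > TOL). The cited routine uses a
# more refined estimator; this sketch is for intuition only.
import math

import numpy as np


def may_abort(samples: np.ndarray, tol: float, delta: float) -> bool:
    """Return True once the estimated deviation probability drops below delta."""
    m = samples.size
    if m < 2:
        return False  # need at least two samples for a variance estimate
    std = samples.std(ddof=1)
    if std == 0.0:
        return True  # degenerate case: all samples identical
    # Under a normal approximation of the sample mean:
    # P(|mean - mu| > TOL) ~= erfc(TOL * sqrt(M) / (std * sqrt(2)))
    deviation_probability = math.erfc(tol * math.sqrt(m) / (std * math.sqrt(2.0)))
    return deviation_probability <= delta


rng = np.random.default_rng(42)
x = rng.normal(0.0, 1.0, size=1_000)
print(may_abort(x, tol=0.1, delta=0.05))  # True: 1000 samples suffice here
```

Here tol plays the role of \(\mathrm{TOL}\) (the tolerance property) and delta the role of \(\delta\) (this confidence property).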
- property plot_scale: str#
Scale of the scalar evaluation plot.
Refer to the Matplotlib documentation for a list of accepted values.
- Returns:
The scale identifier string.
- Return type:
str
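Matplotlib's built-in scale identifiers include "linear", "log", "symlog", and "logit". A short usage sketch, applying such a scale to the value axis of a results plot:

```python
import matplotlib.pyplot as plt

# Matplotlib accepts "linear", "log", "symlog" and "logit" as scale
# identifiers; plot_scale is expected to return one of these strings.
fig, ax = plt.subplots()
ax.set_yscale("log")  # in practice: ax.set_yscale(evaluator.plot_scale)
```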
- property tolerance: float#
Tolerance level required for premature simulation abortion.
The tolerance \(\mathrm{TOL} \in \mathbb{R}_{++}\) is the upper bound on the deviation

\[\left\| \bar{X}_M - \mathrm{E}\left[ X \right] \right\|\]

by which the performance indicator estimate \(\bar{X}_M\) may diverge from the actual expected value \(\mathrm{E}\left[ X \right]\).
- Returns:
Non-negative tolerance \(\mathrm{TOL}\).
- Return type:
float
- Raises:
ValueError – If tolerance is negative.
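Assuming both tolerance and confidence are writable properties, as their documented ValueError conditions suggest, configuring the stopping rule might look as follows. The _DemoEvaluator class below is a stand-in reproducing only the documented validation behavior, not the library's implementation:

```python
# Stand-in reproducing only the documented validation behavior; the
# initial values and error messages are assumptions for the demo.
class _DemoEvaluator:
    def __init__(self) -> None:
        self._tolerance = 0.0
        self._confidence = 1.0

    @property
    def tolerance(self) -> float:
        return self._tolerance

    @tolerance.setter
    def tolerance(self, value: float) -> None:
        if value < 0.0:
            raise ValueError("Tolerance must be non-negative")
        self._tolerance = value

    @property
    def confidence(self) -> float:
        return self._confidence

    @confidence.setter
    def confidence(self, value: float) -> None:
        if not 0.0 <= value <= 1.0:
            raise ValueError("Confidence must lie within [0, 1]")
        self._confidence = value


evaluator = _DemoEvaluator()
evaluator.tolerance = 1e-2  # TOL: tolerated estimation deviation
evaluator.confidence = 0.05  # delta: deviation-probability threshold

try:
    evaluator.tolerance = -1.0  # rejected, per the documented ValueError
except ValueError as error:
    print(error)  # Tolerance must be non-negative
```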