API Reference

This is the class and function reference of batman. Please refer to the previous sections for further details, as the raw class and function specifications may not be enough to provide full guidelines on their use.

batman.space: Parameter space

space.Point Point class.
space.Space(corners[, sample, nrefine, …]) Manages the space of parameters.
space.Doe(n_samples, bounds, kind[, dists, …]) DOE class.
space.Refiner(data, corners[, delta_space, …]) Resampling the space of parameters.

Space module

class batman.space.Space(corners, sample=inf, nrefine=0, plabels=None, multifidelity=None, duplicate=False)[source]

Manages the space of parameters.

__init__(corners, sample=inf, nrefine=0, plabels=None, multifidelity=None, duplicate=False)[source]

Generate a Space.

Parameters:
  • corners (array_like) – hypercube ([min, n_features], [max, n_features]).
  • sample (int/array_like) – number of samples or list of samples of shape (n_samples, n_features).
  • nrefine (int) – number of points to use for refinement.
  • plabels (list(str)) – parameters’ names.
  • multifidelity (list(float)) – Whether to consider the first parameter as the fidelity level. It is a list of [‘cost_ratio’, ‘grand_cost’].
  • duplicate (bool) – Whether to allow duplicate points in space.
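For illustration, a space over two parameters could be created as follows (a minimal sketch based on the signature above; the corner values and labels are arbitrary):

Example:
>> from batman.space import Space
>> # hypercube corners: [[min of each parameter], [max of each parameter]]
>> corners = [[1.0, 2.0], [5.0, 8.0]]
>> # space limited to 10 samples, with named parameters
>> space = Space(corners, sample=10, plabels=['Ks', 'Q'])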
discrepancy(sample=None)[source]

Compute the centered discrepancy.

Returns:Centered discrepancy.
Return type:float.
empty()[source]

Remove all points.

is_full()[source]

Return whether the maximum number of points is reached.

logger = <logging.Logger object>
optimization_results(extremum)[source]

Compute the optimal value.

Parameters:extremum (str) – minimization or maximization objective [‘min’, ‘max’].
read(path)[source]

Read space from the file path.

refine(surrogate, method, point_loo=None, delta_space=0.08, dists=None, hybrid=None, discrete=None, extremum='min')[source]

Refine the sample, update space points and return the new point(s).

Parameters:
  • surrogate (batman.surrogate.SurrogateModel.) – Surrogate.
  • method (str) – Refinement method.
  • point_loo (array_like) – Leave-one-out worst point (n_features,).
  • delta_space (float) – Shrinking factor for the parameter space.
  • dists (lst(str)) – List of valid openturns distributions as string.
  • hybrid (lst(lst(str, int))) – Navigator as list of [Method, n].
  • discrete (int) – Index of the discrete variable.
  • extremum (str) – Minimization or maximization objective [‘min’, ‘max’].
Returns:

List of points to add.

Return type:

Element or list of batman.space.Point.

sampling(n_samples=None, kind='halton', dists=None, discrete=None)[source]

Create point samples in the parameter space.

The minimum number of samples for halton and sobol is 4. For uniform sampling, the number of points is per dimension. The points are registered into the space and replace existing ones.

Parameters:
  • n_samples (int) – number of samples.
  • kind (str) – method of sampling.
  • dists (lst(str)) – List of valid openturns distributions as string.
  • discrete (int) – index of the discrete variable
Returns:

List of points.

Return type:

self.
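As a usage sketch of the call above (continuing the Space sketch; the distribution strings are illustrative and follow the openturns syntax expected by dists):

Example:
>> # fill the space with a 10-point Halton DoE
>> space.sampling(n_samples=10, kind='halton')
>> # or sample from explicit distributions (illustrative values)
>> space.sampling(n_samples=10, kind='halton', dists=['Uniform(1., 5.)', 'Normal(5., 1.)'])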

write(path)[source]

Write space in file.

After writing the points, it plots them.

Parameters:path (str) – folder to save the points in.
class batman.space.Doe(n_samples, bounds, kind, dists=None, discrete=None)[source]

DOE class.

__init__(n_samples, bounds, kind, dists=None, discrete=None)[source]

Initialize the DOE generation.

If kind is uniform, n_samples is decimated in order to have the same number of points in all dimensions.

If kind is discrete, a joint distribution is built from a discrete uniform distribution and the continuous distributions.

Another possibility is to set a list of PDFs to sample from, e.g. dists=[‘Uniform(15., 60.)’, ‘Normal(4035., 400.)’]. If not set, uniform distributions are used.

Parameters:
  • n_samples (int) – number of samples.
  • bounds (array_like) – Space’s corners [[min, n dim], [max, n dim]]
  • kind (str) – Sampling method. If a string, it can be one of [‘halton’, ‘sobol’, ‘faure’, ‘lhs[c]’, ‘sobolscramble’, ‘uniform’, ‘discrete’]; otherwise it can be a list of openturns distributions.
  • dists (lst(str)) – List of valid openturns distributions as string.
  • discrete (int) – Position of the discrete variable.
generate()[source]

Generate the DOE.

Returns:Sampling.
Return type:array_like (n_samples, n_features)
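A minimal usage sketch of the constructor and generate() above (bounds are illustrative):

Example:
>> from batman.space import Doe
>> bounds = [[1.0, 2.0], [5.0, 8.0]]  # [[min, ...], [max, ...]]
>> doe = Doe(20, bounds, 'halton')  # 20-sample Halton DoE
>> sample = doe.generate()  # array_like (20, 2)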
logger = <logging.Logger object>
scramble(x)[source]

Scramble function.

scrambled_sobol_generate()[source]

Scrambled Sobol.

Scramble function as in Owen (1997).

uniform()[source]

Uniform sampling.

class batman.space.Point[source]

Point class.

logger = <logging.Logger object>
classmethod set_threshold(threshold)[source]

Set the threshold for comparing points.

threshold = 0.0

Maximum distance error when comparing 2 points.

class batman.space.Refiner(data, corners, delta_space=0.08, discrete=None)[source]

Resampling the space of parameters.

__init__(data, corners, delta_space=0.08, discrete=None)[source]

Initialize the refiner with the Surrogate and space corners.

Point data are scaled to [0, 1] based on the size of the corners, taking into account the delta_space factor.

Parameters:
discrepancy()[source]

Find the point that minimizes the discrepancy.

Returns:The coordinate of the point to add.
Return type:lst(float)
distance_min(point)[source]

Get the distance of influence.

Compute the distance (Linf norm) between the anchor point and every sampling point. The Linf norm allows adding this length to all coordinates and ensures that no point will lie within this hypercube. It returns the minimal distance. point needs to be scaled by self.corners so the returned distance is scaled.

Parameters:point (array_like) – Anchor point.
Returns:The distance to the nearest point.
Return type:float.
extrema(refined_pod_points)[source]

Find the min or max point.

Using an anchor point based on the extremum value at the sample points, search the hypercube around it. If a new extremum is found, the Nelder-Mead method is used to add a new point. The point is then bounded back into the hypercube.

Returns:The coordinate of the point to add
Return type:lst(float)
func(coords, sign=1)[source]

Get the prediction for a given point.

Retrieve the Gaussian Process estimation. The function returns plus or minus the prediction depending on sign: -1 to find the maximum and 1 to find the minimum.

Parameters:
  • coords (lst(float)) – coordinate of the point
  • sign (float) – -1. or 1.
Returns:

L2 norm of the function at the point

Return type:

float

hybrid(refined_pod_points, point_loo, method, dists)[source]

Composite resampling strategy.

Uses all methods one after another to add new points. It uses the navigator defined within the settings file.

Parameters:
  • refined_pod_points (lst(int)) – point indices not to consider for extrema
  • point_loo (batman.space.point.Point) – leave-one-out point
  • method (str) – resampling method
  • dists (lst(str)) – List of valid openturns distributions as string.
Returns:

The coordinate of the point to add

Return type:

lst(float)

hypercube_distance(point, distance)[source]

Get the hypercube to add a point in.

Propagate the distance around the anchor. point is scaled by self.corners and the input distance has to be as well. Ensure that new values are bounded by the corners.

Parameters:
  • point (array_like) – Anchor point.
  • distance (float) – The distance of influence.
Returns:

The hypercube around the point.

Return type:

array_like.

hypercube_optim(point)[source]

Get the hypercube to add a point in.

Compute the largest hypercube around the point based on the L2-norm. Ensure that only the leave-one-out point lies within it. Ensure that new values are bounded by corners.

Parameters:point (np.array) – Anchor point.
Returns:The hypercube around the point (a point per column).
Return type:array_like.
leave_one_out_sigma(point_loo)[source]

Mixture of Leave-one-out and Sigma.

Estimate the quality of the POD by leave-one-out cross validation (LOOCV), and add a point around the point of maximum error. The point is added within a hypercube around the max error point. The size of the hypercube is equal to the distance to the nearest point.

Parameters:point_loo (tuple) – leave-one-out point.
Returns:The coordinate of the point to add.
Return type:lst(float)
leave_one_out_sobol(point_loo, dists)[source]

Mixture of Leave-one-out and Sobol’ indices.

Same as leave_one_out_sigma() but changes the shape of the hypercube. Using Sobol’ indices, the corners are shrunk by the corresponding percentage of the total indices.

Parameters:
  • point_loo (tuple) – leave-one-out point.
  • dists (lst(str)) – List of valid openturns distributions as string.
Returns:

The coordinate of the point to add.

Return type:

lst(float)

logger = <logging.Logger object>
optimization(method='EI', extremum='min')[source]

Maximization of the Probability/Expected Improvement.

Parameters:
  • method (str) – Flag [‘EI’, ‘PI’].
  • extremum (str) – minimization or maximization objective [‘min’, ‘max’].
Returns:

The coordinate of the point to add.

Return type:

lst(float)

pred_sigma(coords)[source]

Prediction and sigma.

Same as Refiner.func() and Refiner.func_sigma(). Function prediction and sigma are weighted using POD modes.

Parameters:coords (lst(float)) – coordinate of the point
Returns:sum_f and sum_sigma
Return type:floats
sigma(hypercube=None)[source]

Find the point at max Sigma.

It returns the point where the variance (sigma) is maximum. To do so, it uses Gaussian Process information. A genetic algorithm finds the global maximum of the function.

Parameters:hypercube (array_like) – Corners of the hypercube.
Returns:The coordinate of the point to add.
Return type:lst(float)
sigma_discrepancy(weights=None)[source]

Maximization of the composite indicator: sigma - discrepancy.

Parameters:weights (list(float)) – weights of sigma and discrepancy, respectively.
Returns:The coordinate of the point to add.
Return type:lst(float)
batman.space.dists_to_ot(dists)[source]

Convert distributions to openTURNS.

The list of distributions is converted to openTURNS objects.

Example:
>> from batman.space import dists_to_ot
>> dists = dists_to_ot(['Uniform(12, 15)', 'Normal(400, 10)'])
Parameters:dists (list(str)) – Distributions available in openTURNS.
Returns:List of openTURNS distributions.
Return type:list(openturns.Distribution)

batman.surrogate: Surrogate Modelling

surrogate.SurrogateModel(kind, corners, **kwargs) Surrogate model.
surrogate.Kriging(sample, data[, kernel, …]) Kriging based on Gaussian Process.
surrogate.PC(strategy, degree, distributions) Polynomial Chaos class.
surrogate.RBFnet(trainIn, trainOut[, …]) RBF class.

Surrogate model module

class batman.surrogate.RBFnet(trainIn, trainOut, regparam=0.0, radius=1.5, regtree=0, function='default', Pmin=2, Radscale=1.0)[source]

RBF class.

Calc_moyenne(Point)[source]
RBFout(Point, neuroNum)[source]
__init__(trainIn, trainOut, regparam=0.0, radius=1.5, regtree=0, function='default', Pmin=2, Radscale=1.0)[source]

Initialization.

Initializes the main network from an array of training points of size Setsize*(Ninputs) for the inputs and Setsize*(Noutputs) for the outputs in trainOut. A regression tree can optionally be used on the data (regtree=1). The network is then trained on this set with the parameter Regparam.

During initialization, trainIn is copied so as not to modify the original in the calling program.

coefs_mean()[source]

Mean coefficients.

Computes, by multiple linear regression, the mean plane from which the Gaussians start.

compute_radius(cel)[source]

Radius.

Computes the radius for cell i when the regression tree is not used; this radius is computed as half the distance to the nearest cell.

evaluate(x_n, *args, **kwargs)

Get evaluation from space or point.

If the function is a Kriging instance, get and returns the variance.

Returns:function evaluation(s) [sigma(s)]
Return type:np.array([n_eval], n_feature)
trainNet()[source]

Train.

Training of the main neural network: computes the weights. trainIn is an array of training points of size Setsize*(Ninputs) for the inputs and Setsize for the outputs in trainOut.

class batman.surrogate.Kriging(sample, data, kernel=None, noise=False, global_optimizer=True)[source]

Kriging based on Gaussian Process.

__init__(sample, data, kernel=None, noise=False, global_optimizer=True)[source]

Create the predictor.

Uses sample and data to construct a predictor based on a Gaussian Process. The input is to be normalized beforehand and, depending on the number of parameters, the kernel is adapted to be anisotropic.

self.data contains the predictors as a list(array) of the size of the output. A predictor is created per line of data. This leads to a line of predictors that predicts a new column of data.

If noise is a float, it will be used as noise_level by sklearn.gaussian_process.kernels.WhiteKernel. Otherwise, if noise is True, default values are used for the WhiteKernel. If noise is False, no noise is added.

A multiprocessing strategy is used:

  1. Create a process per mode, do not create if only one,
  2. Create n_restart (3 by default) processes by process.

In the end, there are N = n_{restart} \times n_{modes} processes. If there are not enough CPUs, N = \frac{n_{cpu}}{n_{restart}}.

Parameters:
  • sample (array_like) – Sample used to generate the data (n_samples, n_features).
  • data (array_like) – Observed data (n_samples, n_features).
  • kernel (sklearn.gaussian_process.kernels.*.) – Kernel from scikit-learn.
  • noise (float/bool) – Noise used into kriging.
  • global_optimizer (bool) – Whether to do global optimization or gradient based optimization to estimate hyperparameters.
evaluate(x_n, *args, **kwargs)

Get evaluation from space or point.

If the function is a Kriging instance, get and returns the variance.

Returns:function evaluation(s) [sigma(s)]
Return type:np.array([n_eval], n_feature)
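A minimal sketch of building a Kriging predictor from a small, illustrative dataset (the training function is arbitrary; per the docstring above, a Kriging instance also returns the variance on evaluation):

Example:
>> import numpy as np
>> from batman.surrogate import Kriging
>> sample = np.random.random_sample((10, 2))  # 10 samples, 2 features
>> data = np.sum(sample, axis=1).reshape(-1, 1)  # illustrative scalar output
>> krig = Kriging(sample, data)
>> prediction, sigma = krig.evaluate([0.5, 0.5])  # evaluation and variance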
logger = <logging.Logger object>
class batman.surrogate.SklearnRegressor(sample, data, regressor)[source]

Interface to Scikit-learn regressors.

__init__(sample, data, regressor)[source]

Create the predictor.

Uses sample and data to construct a predictor using sklearn. The input is to be normalized beforehand and, depending on the number of parameters, the kernel is adapted to be anisotropic.

Parameters:
  • sample (array_like) – Sample used to generate the data (n_samples, n_features).
  • data (array_like) – Observed data (n_samples, n_features).
  • regressor (Either regressor object or str(sklearn.ensemble.Regressor)) – Scikit-Learn regressor.
evaluate(x_n, *args, **kwargs)

Get evaluation from space or point.

If the function is a Kriging instance, get and returns the variance.

Returns:function evaluation(s) [sigma(s)]
Return type:np.array([n_eval], n_feature)
logger = <logging.Logger object>
class batman.surrogate.PC(strategy, degree, distributions, sample=None, stieltjes=True)[source]

Polynomial Chaos class.

__init__(strategy, degree, distributions, sample=None, stieltjes=True)[source]

Generate truncature and projection strategies.

Along with the strategies, the sample is stored as the attribute sample, as well as the weights: weights.

Parameters:
  • strategy (str) – Least square or Quadrature [‘LS’, ‘Quad’].
  • degree (int) – Polynomial degree.
  • distributions (lst(openturns.Distribution)) – Distributions of each input parameter.
  • sample (int) – Samples for least square.
  • stieltjes (bool) – Whether to use the Stieltjes algorithm for the basis.
evaluate(x_n, *args, **kwargs)

Get evaluation from space or point.

If the function is a Kriging instance, get and returns the variance.

Returns:function evaluation(s) [sigma(s)]
Return type:np.array([n_eval], n_feature)
fit(sample, data)[source]

Create the predictor.

The result of the Polynomial Chaos is stored as pc_result and the surrogate is stored as pc.

Parameters:
  • sample (array_like) – The sample used to generate the data (n_samples, n_features).
  • data (array_like) – The observed data (n_samples, [n_features]).
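A minimal sketch combining __init__() and fit() (assumes openturns is available; the learning data are illustrative, and the stored sample attribute is used as documented above):

Example:
>> import numpy as np
>> import openturns as ot
>> from batman.surrogate import PC
>> dists = [ot.Uniform(0., 1.), ot.Uniform(0., 1.)]
>> pc = PC('LS', 3, dists, sample=50)  # least square, degree 3, 50 samples
>> sample = np.array(pc.sample)  # learning sample stored as an attribute
>> data = np.sum(sample, axis=1).reshape(-1, 1)  # illustrative observations
>> pc.fit(sample, data)
>> prediction = pc.evaluate([0.5, 0.5])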
logger = <logging.Logger object>
class batman.surrogate.SurrogateModel(kind, corners, **kwargs)[source]

Surrogate model.

__call__(points)[source]

Predict snapshots.

Parameters:
  • points (batman.space.Point or array_like (n_samples, n_features).) – point(s) to predict.
  • path (str) – if not set, a list of predicted snapshot instances is returned; otherwise they are written to disk.
Returns:

Result.

Return type:

array_like (n_samples, n_features)

Returns:

Standard deviation.

Return type:

array_like (n_samples, n_features)

__init__(kind, corners, **kwargs)[source]

Init Surrogate model.

Parameters:
  • kind (str) – name of prediction method, rbf or kriging.
  • corners (array_like) – hypercube ([min, n_features], [max, n_features]).
  • **kwargs – See below
Keyword Arguments:
 

For Polynomial Chaos the following keywords are available

  • strategy (str) – Least square or Quadrature [‘LS’, ‘Quad’].
  • degree (int) – Polynomial degree.
  • distributions (lst(openturns.Distribution)) – Distributions of each input parameter.
  • n_samples (int) – Number of samples for least square.

For Kriging the following keywords are available

  • kernel (sklearn.gaussian_process.kernels.*) – Kernel.
  • noise (float/bool) – noise level.
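A minimal end-to-end sketch using the constructor, fit() and __call__() documented here (training data are illustrative; per the __call__ docstring, kriging also returns the standard deviation):

Example:
>> import numpy as np
>> from batman.surrogate import SurrogateModel
>> corners = [[0.0, 0.0], [1.0, 1.0]]
>> surrogate = SurrogateModel('kriging', corners)
>> sample = np.random.random_sample((10, 2))
>> data = np.sum(sample, axis=1).reshape(-1, 1)  # illustrative observations
>> surrogate.fit(sample, data)
>> results, sigma = surrogate([[0.5, 0.5]])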
estimate_quality(method='LOO')[source]

Estimate quality of the model.

Parameters:method (str) – method to compute quality [‘LOO’, ‘ValidationSet’].
Returns:Q2 error.
Return type:float.
Returns:Max MSE point.
Return type:lst(float)
fit(sample, data, pod=None)[source]

Construct the surrogate.

Parameters:
  • sample (array_like) – sample (n_samples, n_features).
  • data (array_like) – function evaluations (n_samples, n_features).
  • pod (batman.pod.Pod.) – POD instance.
logger = <logging.Logger object>
read(fname)[source]

Load model, data and space from disk.

Parameters:fname (str) – path to a directory.
write(fname)[source]

Save model, data and space to disk.

Parameters:fname (str) – path to a directory.
class batman.surrogate.Evofusion(sample, data)[source]

Multifidelity algorithm using Evofusion.

__init__(sample, data)[source]

Create the predictor.

Data are arranged as decreasing fidelity. Hence, sample[0] corresponds to the highest fidelity.

Parameters:
  • sample (array_like) – The sample used to generate the data. (fidelity, n_samples, n_features)
  • data (array_like) – The observed data. (fidelity, n_samples, [n_features])
evaluate(x_n, *args, **kwargs)

Get evaluation from space or point.

If the function is a Kriging instance, get and returns the variance.

Returns:function evaluation(s) [sigma(s)]
Return type:np.array([n_eval], n_feature)
logger = <logging.Logger object>

batman.uq: Uncertainty Quantification

uq.UQ(surrogate[, dists, nsample, method, …]) Uncertainty Quantification class.

UQ module

class batman.uq.UQ(surrogate, dists=None, nsample=5000, method='sobol', indices='aggregated', space=None, data=None, plabels=None, xlabel=None, flabel=None, xdata=None, fname=None, test=None)[source]

Uncertainty Quantification class.

__init__(surrogate, dists=None, nsample=5000, method='sobol', indices='aggregated', space=None, data=None, plabels=None, xlabel=None, flabel=None, xdata=None, fname=None, test=None)[source]

Init the UQ class.

From the settings file, it gets:

  • Method to use for the Sensitivity Analysis (SA),
  • Type of Sobol’ indices to compute,
  • Number of points per sample to use for SA (N(2p+2) predictions); the resulting storage is 6N(out+p)*8 bytes, i.e. about 184 MB if N=1e4,
  • Method to use to predict a new snapshot,
  • The list of input variables,
  • The length of the output function.

Also, it creates the model and int_model as openturns.PythonFunction.

Parameters:
  • surrogate (class:batman.surrogate.SurrogateModel.) – Surrogate model.
  • space (class:batman.space.Space.) – sample space (can be a list).
  • data (array_like) – Snapshot’s data (n_samples, n_features).
  • plabels (list(str)) – parameters’ names.
  • xlabel (str) – label of the discretization parameter.
  • flabel (str) – name of the quantity of interest.
  • xdata (array_like) – 1D discretization of the function (n_features,).
  • fname (str) – folder output path.
  • test (str) – Test function from class:batman.functions.
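A minimal sketch of running a sensitivity analysis on a fitted surrogate (continuing the SurrogateModel sketch above; distribution strings, labels and the output folder name are illustrative):

Example:
>> from batman.uq import UQ
>> dists = ['Uniform(0., 1.)', 'Uniform(0., 1.)']
>> analysis = UQ(surrogate, dists=dists, nsample=1000, plabels=['x1', 'x2'], fname='uq_output')
>> indices = analysis.sobol()
>> analysis.error_propagation()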
error_model(indices, function)[source]

Compute the error between the POD and the analytic function.

Warning

For test purposes only. Choices are the Ishigami, Rosenbrock, Michalewicz, G_Function and Channel_Flow test functions.

From the surrogate of the function, evaluate the error using the analytical evaluation of the function on the sample points.

Q^2 = 1 - \frac{err_{l2}}{var_{model}}

Knowing that err_{l2} = \sum \frac{(prediction - reference)^2}{n}, var_{model} = \sum \frac{(prediction - mean)^2}{n}

Also, it computes the mean square error on the Sobol’ first and total order indices.

A summary is written within model_err.dat.

Parameters:
  • indices (array_like) – Sobol first order indices computed using the POD.
  • function (str) – name of the analytic function.
Returns:

err_q2, mse, s_l2_2nd, s_l2_1st, s_l2_total.

Return type:

array_like.

error_propagation()[source]

Compute the moments.

First and second order moments are computed for every output of the function. The PDFs of these outputs are also computed, along with the correlations (YY and XY) and the covariance (YY). Both are exported as 2D cartesian plots. The files are, respectively:

  • pdf-moment.dat -> moments [discretized on curvilinear abscissa]
  • pdf.dat -> the PDFs [discretized on curvilinear abscissa]
  • correlation_covariance.dat -> correlation and covariance YY
  • correlation_XY.dat -> correlation XY
  • pdf.pdf -> plot of the PDF (with moments if dim > 1)
func(x_n, *args, **kwargs)

Get evaluation from space or point.

If the function is a Kriging instance, get and returns the variance.

Returns:function evaluation(s) [sigma(s)]
Return type:np.array([n_eval], n_feature)
int_func(x_n, *args, **kwargs)

Get evaluation from space or point.

If the function is a Kriging instance, get and returns the variance.

Returns:function evaluation(s) [sigma(s)]
Return type:np.array([n_eval], n_feature)
logger = <logging.Logger object>
sobol()[source]

Compute Sobol’ indices.

It returns the second, first and total order indices of Sobol’. Two methods are possible for the indices:

  • sobol
  • FAST

Warning

The second order indices are only available with the sobol method. Also, when there is no surrogate (ensemble mode), FAST is not available and the DoE must have been generated with saltelli.

Two types of computation are available for the global indices:

  • block
  • aggregated

If aggregated, map indices are computed. In the case of a scalar value, all types return the same values. block indices are written to sensitivity.dat and aggregated indices to sensitivity_aggregated.dat.

Finally, it calls error_pod() in order to compare the indices with their analytical values.

Returns:Sobol’ indices.
Return type:array_like.

batman.visualization: Uncertainty Visualization

visualization.Kiviat3D(sample, data[, …]) 3D version of the Kiviat plot.
visualization.Tree(sample, data[, bounds, …]) Tree.
visualization.HdrBoxplot(data[, variance, …]) High Density Region boxplot.
visualization.doe(sample[, plabels, …]) Plot the space of parameters 2d-by-2d.
visualization.response_surface(bounds[, …]) Response surface visualization in 2d (image), 3d (movie) or 4d (movies).
visualization.sobol(sobols[, conf, plabels, …]) Plot total Sobol’ indices.
visualization.corr_cov(data, sample, xdata) Correlation and covariance matrices.
visualization.pdf(data[, xdata, xlabel, …]) Plot PDF in 1D or 2D.
visualization.kernel_smoothing(data[, optimize]) Create gaussian kernel.
visualization.reshow(fig) Create a dummy figure and use its manager to display fig.

Visualization module

class batman.visualization.Kiviat3D(sample, data, bounds=None, plabels=None, range_cbar=None)[source]

3D version of the Kiviat plot.

Realizations are stacked on top of each other. The axes represent the parameters used to perform each realization.

__init__(sample, data, bounds=None, plabels=None, range_cbar=None)[source]

Prepare params for Kiviat plot.

Parameters:
  • sample (array_like) – Sample of parameters of shape (n_samples, n_params).
  • data (array_like) – Sample of realizations corresponding to the sample of parameters sample (n_samples, n_features).
  • bounds (array_like) – Boundaries to scale the colors, shape ([min, n_features], [max, n_features]).
  • plabels (list(str)) – Names of each parameter (n_features).
  • range_cbar (array_like) – Minimum and maximum values for output function (2 values).
f_hops(frame_rate=400, fname='kiviat-HOPs.mp4', flabel='F', ticks_nbr=10, fill=True)[source]

Plot HOPs 3D kiviat.

Each frame consists in a 3D Kiviat with an additional outcome highlighted.

Parameters:
  • frame_rate (int) – Time between two outcomes (in milliseconds).
  • fname (str) – Export movie to filename.
  • flabel (str) – Name of the output function to be plotted next to the colorbar.
  • fill (bool) – Whether to fill the surface.
  • ticks_nbr (int) – Number of ticks in the colorbar.
static mesh_connectivity(n_params)[source]

Compute connectivity for Kiviat.

Using the n_points and n_params, it creates the connectivity required by VTK’s pixel elements:

       4
3 *----*----* 5
  |    |    |
0 *----*----* 2
       1

This will output:

4 0 1 3 4
4 1 2 4 5
Parameters:
  • n_points (int) – Number of points.
  • n_params (int) – Number of features.
Returns:

Connectivity.

Return type:

array_like of shape (n_cells, 5)

static mesh_vtk_ascii(data, connectivity, fname='mesh_kiviat.vtk')[source]

Write mesh file in VTK ascii format.

Format is as following (example with 3 cells):

# vtk DataFile Version 2.0
Kiviat 3D
ASCII

DATASET UNSTRUCTURED_GRID

POINTS 6 float
-0.40  0.73 0.00
-0.00 -0.03 0.00
 0.50  0.00 0.00
-0.40  0.85 0.04
-0.00 -0.12 0.04
 0.50  0.00 0.04


CELLS 3 15
4 0 1 3 4
4 1 2 4 5
4 2 0 5 3

CELL_TYPES 3
8
8
8

POINT_DATA 6
SCALARS value double
LOOKUP_TABLE default
17.770e+0
17.770e+0
17.770e+0
17.774e+0
17.774e+0
17.774e+0
Parameters:
  • coordinates (array_like) – Sample coordinates of shape (n_samples, n_features).
  • data (array_like) – function evaluations of shape (n_samples, n_features).
plot(fname=None, flabel='F', ticks_nbr=10, fill=True)[source]

Plot 3D kiviat.

Along with the matplotlib visualization, a VTK mesh is created.

Parameters:
  • fname (str) – Whether to export to filename or display the figures.
  • flabel (str) – Name of the output function to be plotted next to the colorbar.
  • ticks_nbr (int) – Number of ticks in the colorbar.
  • fill (bool) – Whether to fill the surface.
Returns:

figure.

Return type:

Matplotlib figure instance, Matplotlib AxesSubplot instances.

class batman.visualization.Tree(sample, data, bounds=None, plabels=None, range_cbar=None)[source]

Tree.

Extends the principle of batman.visualization.Kiviat3D to a 2D parameter space. Samples are represented by segments and an azimuthal component encodes the value from batman.visualization.HdrBoxplot.

Subclasses batman.visualization.Kiviat3D by overriding batman.visualization.Kiviat3D._axis() and batman.visualization.Kiviat3D.plane().

__init__(sample, data, bounds=None, plabels=None, range_cbar=None)[source]

Prepare params for Tree plot.

Parameters:
  • sample (array_like) – Sample of parameters of shape (n_samples, n_params).
  • data (array_like) – Sample of realizations corresponding to the sample of parameters sample (n_samples, n_features).
  • bounds (array_like) – Boundaries to scale the colors, shape ([min, n_features], [max, n_features]).
  • plabels (list(str)) – Names of each parameter (n_features).
  • range_cbar (array_like) – Minimum and maximum values for output function (2 values).
class batman.visualization.HdrBoxplot(data, variance=0.8, alpha=None, threshold=0.95, outliers_method='kde', optimize=False)[source]

High Density Region boxplot.

From a given dataset, it computes the HDR-boxplot. Results are accessible directly through class attributes:

  • median : median curve,
  • outliers : outliers regarding a given threshold,
  • hdr_90 : 90% quantile band,
  • extra_quantiles : other quantile bands,
  • hdr_50 : 50% quantile band.

The following methods are for convenience:

Example:
>> hdr = HdrBoxplot(data)
>> hdr.plot()
>> hdr.f_hops(generate=10)
>> hdr.sound()
__init__(data, variance=0.8, alpha=None, threshold=0.95, outliers_method='kde', optimize=False)[source]

Compute HDR Boxplot on data.

  1. Compute a 2D kernel smoothing with a Gaussian kernel,
  2. Compute contour lines for quantiles 90, 50 and alpha,
  3. Compute the median curve along with quantile regions and outlier curves.
Parameters:
  • data (array_like) – dataset (n_samples, n_features).
  • variance (float) – percentage of total variance to conserve.
  • alpha (array_like) – extra quantile values (n_alpha).
  • threshold (float) – threshold for outliers.
  • outliers_method (str) – detection method [‘kde’, ‘forest’].
  • optimize (bool) – bandwidth global optimization or grid search.
  • n_contours (int) – discretization to compute contour.
band_quantiles(band)[source]

Find extreme curves for a quantile band.

From the band of quantiles, the associated PDF extrema values are computed. If min_alpha is not provided (single quantile value), max_pdf is set to 1E6 in order not to constrain the problem on high values.

An optimization is performed per component in order to find the min and max curves. This is done by comparing the PDF value of a given curve with the band PDF.

Parameters:band (array_like) – alpha values [max_alpha, min_alpha] ex: [0.9, 0.5].
Returns:[max_quantile, min_quantile] (2, n_features).
Return type:list(array_like)
f_hops(frame_rate=400, fname='f-HOPs.mp4', samples=None, x_common=None, labels=None, xlabel='t', flabel='F', offset=0.05)[source]

Functional Hypothetical Outcome Plots.

Each frame consists in a HDR boxplot and an additional outcome. If it is an outlier, it is rendered as red dashed line.

If samples is None, the dataset is used; if it is an int > 0, n new samples are drawn; and if it is array_like of shape (n_samples, n_features), it is used directly.

Parameters:
  • frame_rate (int) – time between two outcomes (in milliseconds).
  • fname (str) – export movie to filename.
  • samples (False, int, list) – Data selector.
  • x_common (array_like) – abscissa.
  • labels (list(str)) – labels for each curve.
  • xlabel (str) – label for x axis.
  • flabel (str) – label for y axis.
  • offset (float) – Margin around the extreme values of the plot.
find_outliers(data, samples, method='kde', threshold=0.95)[source]

Detect outliers.

The Isolation Forest method requires additional computations to find the centroid. This operation is only performed once and stored in self.detector. Thus, calling the method several times will not cause any overhead.

Parameters:
  • data (array_like) – data from which to extract outliers (n_samples, n_features).
  • samples (array_like) – samples values to examine (n_samples, n_features/n_components).
  • method (str) – detection method [‘kde’, ‘forest’].
  • threshold (float) – detection sensitivity.
Returns:

Outliers.

Return type:

array_like (n_outliers, n_features)

logger = <logging.Logger object>
plot(samples=None, fname=None, x_common=None, labels=None, xlabel='t', flabel='F')[source]

Functional plot and n-variate space.

If self.n_components is 2, an additional contour plot is done. If samples is None, the dataset is used for all plots; otherwise the given sample is used.

Parameters:
  • samples (array_like) – samples to plot (n_samples, n_features).
  • fname (str) – whether to export to filename or display the figures.
  • x_common (array_like) – abscissa (1, n_features).
  • labels (list(str)) – labels for each curve.
  • xlabel (str) – label for x axis.
  • flabel (str) – label for y axis.
Returns:

figures and all axis.

Return type:

Matplotlib figure instances, Matplotlib AxesSubplot instances.

sample(samples)[source]

Sample new curves from KDE.

If samples is an int > 0, n new curves are randomly sampled taking into account the joint PDF; and if it is array_like of shape (n_samples, n_components), curves are sampled from the reduced coordinates of the n-variate space.

Parameters:samples (int, array_like) – Data selector.
Returns:new curves.
Return type:array_like (n_samples, n_features)
sound(frame_rate=400, tone_range=None, amplitude=1000.0, distance=True, samples=False, fname='song-fHOPs.wav')[source]

Make sound from curves.

Each curve is converted into a sum of tones. This sum is played for a given time before another series starts.

If samples is False, the dataset is used; if it is an int > 0, n new samples are drawn; and if it is array_like of shape (n_samples, n_features), it is used directly.

Parameters:
  • frame_rate (int) – time between two outcomes (in milliseconds).
  • tone_range (list(int)) – range of frequencies of a tone (in hertz).
  • amplitude (float) – amplitude of the signal.
  • distance (bool) – use distance from median for tone generation.
  • samples (False, int, list) – Data selector.
  • fname (str) – export sound to filename.
batman.visualization.kernel_smoothing(data, optimize=False)[source]

Create gaussian kernel.

The optimization option could lead to longer computation of the PDF.

Parameters:
  • data (array_like) – output sample to draw a PDF from (n_samples, n_features).
  • optimize (bool) – use global optimization or grid search.
Returns:

gaussian kernel.

Return type:

sklearn.neighbors.KernelDensity.

batman.visualization.pdf(data, xdata=None, xlabel=None, flabel=None, moments=False, ticks_nbr=10, range_cbar=None, fname=None)[source]

Plot PDF in 1D or 2D.

Parameters:
  • data (nd_array/dict) – array of shape (n_samples, n_features) or a dictionary with the following:
  • xdata (array_like) – 1D discretization of the function (n_features,).
  • xlabel (str) – label of the discretization parameter.
  • flabel (str) – name of the quantity of interest.
  • moments (bool) – whether to plot moments along with PDF if dim > 1.
  • ticks_nbr (int) – number of color isolines for response surfaces.
  • range_cbar (array_like) – Minimum and maximum values for output function (2 values).
  • fname (str) – whether to export to filename or display the figures.
Returns:

figure.

Return type:

Matplotlib figure instances, Matplotlib AxesSubplot instances.
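A minimal usage sketch for a functional output (the dataset is random and purely illustrative, as is the output file name):

Example:
>> import numpy as np
>> from batman.visualization import pdf
>> xdata = np.linspace(0., 1., 100)  # 1D discretization of the output
>> data = np.random.random_sample((200, 100))  # 200 illustrative curves
>> fig = pdf(data, xdata=xdata, xlabel='s', flabel='F', fname='pdf.pdf')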

batman.visualization.sobol(sobols, conf=None, plabels=None, xdata=None, xlabel='x', fname=None)[source]

Plot total Sobol’ indices.

If len(sobols) > 2, map indices are also plotted along with the aggregated indices.

Parameters:
  • sobols (array_like) – [first (n_params), total (n_params), first (xdata, n_params), total (xdata, n_params)].
  • conf (float/array_like) – relative error around indices. If float, same error is applied for all parameters. Otherwise shape ([min, n_features], [max, n_features]).
  • plabels (list(str)) – parameters’ names.
  • xdata (array_like) – 1D discretization of the function (n_features,).
  • xlabel (str) – label of the discretization parameter.
  • fname (str) – whether to export to filename or display the figures.
Returns:

figure.

Return type:

Matplotlib figure instances, Matplotlib AxesSubplot instances.
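A minimal usage sketch with aggregated indices only (two parameters; index values, labels and file name are illustrative):

Example:
>> from batman.visualization import sobol
>> indices = [[0.3, 0.6], [0.4, 0.7]]  # [first order, total order] per parameter
>> fig = sobol(indices, plabels=['Ks', 'Q'], fname='sobol.pdf')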

batman.visualization.corr_cov(data, sample, xdata, xlabel='x', plabels=None, interpolation=None, fname=None)[source]

Correlation and covariance matrices.

Compute the covariance regarding YY and XY as well as the correlation regarding YY.

Parameters:
  • data (array_like) – function evaluations (n_samples, n_features).
  • sample (array_like) – sample (n_samples, n_features).
  • xdata (array_like) – 1D discretization of the function (n_features,).
  • xlabel (str) – label of the discretization parameter.
  • plabels (list(str)) – parameters’ labels.
  • interpolation (str) – If None, does not interpolate correlation and covariance matrices (YY). Otherwise, use Matplotlib interpolation methods from imshow such as [‘bilinear’, ‘lanczos’, ‘spline16’, ‘hermite’, …].
  • fname (str) – whether to export to filename or display the figures.
Returns:

figure.

Return type:

Matplotlib figure instances, Matplotlib AxesSubplot instances.

batman.visualization.reshow(fig)[source]

Create a dummy figure and use its manager to display fig.

Parameters:fig – Matplotlib figure instance
batman.visualization.save_show(fname, figures)[source]

Either show or save the figure[s].

If fname is None the figure will show.

Parameters:
  • fname (str) – whether to export to filename or display the figures.
  • figures (list(Matplotlib figure instance)) – Figures to handle.
batman.visualization.response_surface(bounds, sample=None, data=None, fun=None, doe=None, resampling=0, xdata=None, axis_disc=None, flabel='F', plabels=None, feat_order=None, ticks_nbr=10, range_cbar=None, contours=None, fname=None)[source]

Response surface visualization in 2d (image), 3d (movie) or 4d (movies).

You have to set either (i) sample with data or (ii) fun, depending on your data. If (i), the data are interpolated on a mesh in order to be plotted as a surface. Otherwise, fun is directly used to generate correct data.

The DoE can also be plotted by setting doe along with resampling.

Parameters:
  • bounds (array_like) – sample boundaries ([min, n_features], [max, n_features]).
  • sample (array_like) – sample (n_samples, n_features).
  • data (array_like) – function evaluations(n_samples, [n_features]).
  • fun (callable) – function to plot the response from.
  • doe (array_like) – design of experiment (n_samples, n_features).
  • resampling (int) – number of resampling points.
  • xdata (array_like) – 1D discretization of the function (n_features,).
  • axis_disc (array_like) – discretisation of the sample on each axis (n_features).
  • flabel (str) – name of the quantity of interest.
  • plabels (list(str)) – parameters’ labels.
  • feat_order (array_like) – order of features for multi-dimensional plot (n_features).
  • ticks_nbr (int) – number of color isolines for response surfaces.
  • range_cbar (array_like) – min and max values for colorbar range (2).
  • contours (array_like) – isocontour values to plot on response surface.
  • fname (str) – whether to export to filename or display the figures.
Returns:

figure.

Return type:

Matplotlib figure instances, Matplotlib AxesSubplot instances.

batman.visualization.doe(sample, plabels=None, resampling=0, multifidelity=False, fname=None)[source]

Plot the space of parameters 2d-by-2d.

An n-variate plot is constructed with every pair of variables. The distribution of each variable is shown on the diagonal.

Parameters:
  • sample (array_like) – sample (n_samples, n_features).
  • plabels (list(str)) – parameters’ names.
  • resampling (int) – number of resampling points.
  • multifidelity (bool) – whether or not the model is a multifidelity.
  • fname (str) – whether to export to filename or display the figures.
Returns:

figure.

Return type:

Matplotlib figure instances, Matplotlib AxesSubplot instances.
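A minimal usage sketch on a random, illustrative sample (labels and file name are also illustrative):

Example:
>> import numpy as np
>> from batman.visualization import doe
>> sample = np.random.random_sample((50, 3))  # 50 points, 3 parameters
>> fig = doe(sample, plabels=['x1', 'x2', 'x3'], fname='doe.pdf')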

batman.pod: Proper Orthogonal Decomposition

pod.Pod(corners, nsample, tolerance, dim_max) POD class.

Pod module

class batman.pod.Pod(corners, nsample, tolerance, dim_max, nrefine=0)[source]

POD class.

VS()[source]

Compute V*S matrix product.

S is diagonal and stored as a vector, thus (V*S).T = S V.T

__init__(corners, nsample, tolerance, dim_max, nrefine=0)[source]

Initialize POD components.

The decomposition of the snapshot matrix is stored as attributes:

  • U: Singular vectors matrix, array_like (n_features, n_snapshots), after filtering array_like(n_features, n_modes),
  • S: Singular values matrix, array_like (n_modes, n_snapshots), only the diagonal is stored, of length (n_modes),
  • V: array_like(n_snapshots, n_snapshots), after filtering (n_snapshots, n_modes).
Parameters:
  • corners (array_like) – hypercube ([min, n_features], [max, n_features]).
  • sample (int/array_like) – number of samples or list of samples of shape (n_samples, n_features).
  • nrefine (int) – number of points to use for refinement.
  • tolerance (float) – basis modes filtering criteria.
  • dim_max (int) – number of basis modes to keep.
decompose(snapshots)[source]

Create a POD from a set of snapshots.

Parameters:snapshots (lst(array)) – snapshots matrix.
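A minimal sketch of building a POD from illustrative snapshots, following the constructor and decompose() signatures above (corner values, sizes and tolerance are arbitrary):

Example:
>> import numpy as np
>> from batman.pod import Pod
>> corners = [[0.0, 0.0], [1.0, 1.0]]
>> pod = Pod(corners, nsample=20, tolerance=0.99, dim_max=10)
>> snapshots = [np.random.random_sample(100) for _ in range(20)]  # illustrative
>> pod.decompose(snapshots)
>> q2 = pod.estimate_quality()  # leave-one-out quality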
directories = {'mean_snapshot': 'Mean.txt', 'modes': 'Mods.npz'}
static downgrade(Vt)[source]

Downgrade by removing the kth row of V.

S^{-k} &= U\Sigma R^T Q^T\\
S^{-k} &= UU'\Sigma'V'^TQ^T \\
S^{-k} &= U^{-k}\Sigma'V^{(-k)^T}

Parameters:
  • S – Singular vector, array_like (n_modes,).
  • Vt – V.T without one row, array_like (n_snapshots - 1, n_modes).
Returns:

U’, S’, V(-k).T

Return type:

array_like.

estimate_quality()[source]

Quality estimator.

Estimate the quality of the POD by the leave-one-out method.

Returns:Q2.
Return type:float.
static filtering(S, V, tolerance, dim_max)[source]

Remove lowest modes in U, S and V.

Parameters:
  • U (array_like) – (nb of data, nb of snapshots).
  • S (array_like) – (nb of modes).
  • V (array_like) – (nb of snapshots, nb of snapshots).
  • tolerance (float) – basis modes filtering criteria.
  • dim_max (int) – number of basis modes to keep.
Returns:

U (nb of data, nb of modes).

Return type:

array_like.

Returns:

S (nb of modes).

Return type:

array_like.

Returns:

V (nb of snapshots, nb of modes).

Return type:

array_like.

logger = <logging.Logger object>
pod_file_name = 'pod.npz'
points_file_name = 'points.dat'
read(path)[source]

Read a POD from disk.

Parameters:path (str) – path to a directory.
update(snapshot)[source]

Update POD with a new snapshot.

Parameters:snapshot – new snapshot to update the POD with.
write(path)[source]

Save a POD to disk.

Parameters:path (str) – path to a directory.

batman.functions: Functions

functions.data Data module
functions.analytical.SixHumpCamel() SixHumpCamel class [Molga2005].
functions.analytical.Branin() Branin class [Forrester2008].
functions.analytical.Michalewicz([d, m]) Michalewicz class [Molga2005].
functions.analytical.Ishigami([a, b]) Ishigami class [Ishigami1990].
functions.analytical.Rastrigin([d]) Rastrigin class [Molga2005].
functions.analytical.G_Function([d, a]) G_Function class [Saltelli2000].
functions.analytical.Forrester([fidelity]) Forrester class [Forrester2007].
functions.analytical.ChemicalSpill([s, tstep]) Environmental Model class [Bliznyuk2008].
functions.analytical.Channel_Flow([dx, …]) Channel Flow class.
functions.analytical.Manning([width, slope, …]) Manning equation for rectangular channel class.
functions.telemac_mascaret.Mascaret() Mascaret class.
functions.telemac_mascaret.MascaretApi(…) Mascaret API.
functions.utils.multi_eval(fun) Detect space or unique point.
functions.utils.output_to_sequence(fun) Convert float output to list.

Data module

class batman.functions.data.Data(data, desc, sample=None, plabels=None, flabels=None)[source]

Wrap datasets into a Mapping container.

Store a dataset along with some information about it. data corresponds to the model's output and sample to the corresponding inputs.

Structured arrays are created for both data and sample. This allows values to be accessed using either normal indexing or attribute indexing through the feature labels.

If required, toarray() converts both data and sample into regular arrays.

__init__(data, desc, sample=None, plabels=None, flabels=None)[source]

Dataset container.

Both data and sample are required to be 2D arrays. Thus with one feature, shape must be (n_samples, 1).

Parameters:
  • data (array_like) – (n_features, n_samples).
  • desc (str) – dataset description.
  • sample (array_like) – sampling used to create the data (n_features, n_samples).
  • plabels (list(str)) – parameters’ labels (n_features,).
  • flabels (list(str)) – names of the quantities of interest (n_features,).
logger = <logging.Logger object>
toarray()[source]

Convert the structured array to regular arrays.

This prevents accessing sample and data using attributes from the respective labels.

batman.functions.data.el_nino()[source]

El Nino dataset.

batman.functions.data.tahiti()[source]

Tahiti dataset.

batman.functions.data.mascaret()[source]

Mascaret dataset.

batman.functions.data.marthe()[source]

MARTHE dataset.

Analytical module

Defines analytical Uncertainty Quantification oriented functions for test and model evaluation purpose.

It implements the following classes:

In each case, Sobol’ indices are declared.

References

[Molga2005](1, 2, 3, 4, 5, 6) Molga, M., & Smutnicki, C. Test functions for optimization needs (2005).
[Dixon1978]Dixon, L. C. W., & Szego, G. P. (1978). The global optimization problem: an introduction. Towards global optimization, 2, 1-15.
[Ishigami1990](1, 2) Ishigami, T., & Homma, T. (1990, December): An importance quantification technique in uncertainty analysis for computer models. In Uncertainty Modeling and Analysis, 1990. Proceedings., First International Symposium on (pp. 398-403). IEEE.
[Saltelli2000](1, 2) Saltelli, A., Chan, K., & Scott, E. M. (Eds.). (2000). Sensitivity analysis (Vol. 134). New York: Wiley.
[Forrester2007](1, 2) Forrester, Sobester. (2007). Multi-Fidelity Optimization via Surrogate Modelling. In Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences.
[Forrester2008](1, 2) Forrester, A., Sobester, A., & Keane, A. (2008). Engineering design via surrogate modelling: a practical guide. Wiley.
[Bliznyuk2008](1, 2) Bliznyuk, N., Ruppert, D., Shoemaker, C., Regis, R., Wild, S., & Mugunthan, P. (2008). Bayesian calibration and uncertainty analysis for computationally expensive models using optimization and radial basis function approximation. Journal of Computational and Graphical Statistics, 17(2).
[Surjanovic2017]Surjanovic, S. & Bingham, D. (2013). Virtual Library of Simulation Experiments: Test Functions and Datasets. Retrieved September 11, 2017, from http://www.sfu.ca/~ssurjano.
class batman.functions.analytical.Branin[source]

Branin class [Forrester2008].

f(x) = \left( x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6
\right)^2 + 10 \left[ \left( 1 - \frac{1}{8\pi} \right) \cos(x_1)
+ 1 \right] + 5x_1.

The function has two local minima and one global minimum. It is a modified version of the original Branin function that seeks to be representative of engineering functions.

f(x^*) = -15.310076, x^* = (-\pi, 12.275), x_1 \in [-5, 10], x_2 \in [0, 15]

__call__(x_n, *args, **kwargs)

Get evaluation from space or point.

If the function is a Kriging instance, get and returns the variance.

Returns:function evaluation(s) [sigma(s)]
Return type:np.array([n_eval], n_feature)
__init__()[source]

Set up attributes.

logger = <logging.Logger object>
class batman.functions.analytical.Channel_Flow(dx=8000.0, length=40000.0, width=500.0)[source]

Channel Flow class.

\frac{dh}{ds}=\mathcal{F}(h)=I\frac{1-(h/h_n)^{-10/3}}{1-(h/h_c)^{-3}}\\
h_c=\left(\frac{q^2}{g}\right)^{1/3}, h_n=\left(\frac{q^2}{IK_s^2}\right)^{3/10}

__call__(x_n, *args, **kwargs)

Get evaluation from space or point.

If the function is a Kriging instance, get and returns the variance.

Returns:function evaluation(s) [sigma(s)]
Return type:np.array([n_eval], n_feature)
__init__(dx=8000.0, length=40000.0, width=500.0)[source]

Initialize the geometrical configuration.

Parameters:
  • dx (float) – discretization
  • length (float) – canal length
  • width (float) – canal width
logger = <logging.Logger object>
class batman.functions.analytical.ChemicalSpill(s=None, tstep=0.3)[source]

Environmental Model class [Bliznyuk2008].

Model a pollutant spill caused by a chemical accident. C(x) being the concentration of the pollutant at the space-time vector (s, t), with 0 < s < 3 and t > 0.

A mass M of pollutant is spilled at each of two locations, denoted by the space-time vectors (0, 0) and (L, \tau). Each element of the response is a scaled concentration of the pollutant at the space-time vector.

f(X) = \sqrt{4\pi}C(X), x \in [[7, 13], [0.02, 0.12], [0.01, 3],
[30.1, 30.295]]\\
C(X) = \frac{M}{\sqrt{4\pi D_{t}}}\exp \left(\frac{-s^2}{4D_t}\right) +
\frac{M}{\sqrt{4\pi D_{t}(t - \tau)}} \exp \left(-\frac{(s-L)^2}{4D(t -
\tau)}\right) I (\tau < t)

__call__(x_n, *args, **kwargs)

Get evaluation from space or point.

If the function is a Kriging instance, get and returns the variance.

Returns:function evaluation(s) [sigma(s)]
Return type:np.array([n_eval], n_feature)
__init__(s=None, tstep=0.3)[source]

Definition of the time-space domain.

Parameters:
  • s (list) – locations
  • tstep (float) – time-step
logger = <logging.Logger object>
class batman.functions.analytical.Forrester(fidelity='e')[source]

Forrester class [Forrester2007].

F_{e}(x) = (6x-2)^2\sin(12x-4), \\
F_{c}(x) = AF_e(x)+B(x-0.5)+C,

where x \in [0, 1] and A=0.5, B=10, C=-5.

This set of two functions is used to represent a high and a low fidelity.

__call__(x_n, *args, **kwargs)

Get evaluation from space or point.

If the function is a Kriging instance, get and returns the variance.

Returns:function evaluation(s) [sigma(s)]
Return type:np.array([n_eval], n_feature)
__init__(fidelity='e')[source]

Forrester-function definition.

e stands for expensive and c for cheap.

Parameters:fidelity (str) – select the fidelity ['e'|'c']
logger = <logging.Logger object>
class batman.functions.analytical.G_Function(d=4, a=None)[source]

G_Function class [Saltelli2000].

F = \Pi_{i=1}^d \frac{\lvert 4x_i - 2\rvert + a_i}{1 + a_i}

The coefficient a_i determines the impact of each parameter on the output: the larger the coefficient, the less important the parameter.

__call__(x_n, *args, **kwargs)

Get evaluation from space or point.

If the function is a Kriging instance, get and returns the variance.

Returns:function evaluation(s) [sigma(s)]
Return type:np.array([n_eval], n_feature)
__init__(d=4, a=None)[source]

G-function definition.

Parameters:
  • d (int) – input dimension
  • a (np.array) – (1, d)
logger = <logging.Logger object>
class batman.functions.analytical.Ishigami(a=7.0, b=0.1)[source]

Ishigami class [Ishigami1990].

F = \sin(x_1)+7\sin(x_2)^2+0.1x_3^4\sin(x_1), x\in [-\pi, \pi]^3

It exhibits strong nonlinearity and nonmonotonicity. The parameters a and b control the strength of the non-linearities. It also has a dependence on x_3 due to second order interactions (F_{13}).

__call__(x_n, *args, **kwargs)

Get evaluation from space or point.

If the function is a Kriging instance, get and returns the variance.

Returns:function evaluation(s) [sigma(s)]
Return type:np.array([n_eval], n_feature)
__init__(a=7.0, b=0.1)[source]

Set up Ishigami.

Parameters:a, b (float) – Ishigami parameters
logger = <logging.Logger object>
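A minimal sketch of evaluating the function on a single point and on a small sample (per functions.utils.multi_eval listed above, both forms are handled; the point values are arbitrary within the domain):

Example:
>> from batman.functions.analytical import Ishigami
>> f = Ishigami()
>> y_point = f([1.0, -1.0, 2.0])  # single point
>> y_sample = f([[1.0, -1.0, 2.0], [0.0, 0.5, -0.5]])  # sample of two points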
class batman.functions.analytical.Manning(width=100.0, slope=0.0005, inflow=1000, d=1)[source]

Manning equation for rectangular channel class.

__call__(x_n, *args, **kwargs)

Get evaluation from space or point.

If the function is a Kriging instance, get and returns the variance.

Returns:function evaluation(s) [sigma(s)]
Return type:np.array([n_eval], n_feature)
__init__(width=100.0, slope=0.0005, inflow=1000, d=1)[source]

Initialize the geometrical configuration.

Parameters:
  • width (float) – canal width
  • slope (float) – canal slope
  • inflow (float) – canal inflow (optional)
  • d (int) – 1 (Ks) or 2 (Ks, Q)
logger = <logging.Logger object>
class batman.functions.analytical.Michalewicz(d=2, m=10)[source]

Michalewicz class [Molga2005].

It is a multimodal d-dimensional function which has d! local minima

f(x)=-\sum_{i=1}^d \sin(x_i)\sin^{2m}\left(\frac{ix_i^2}{\pi}\right),

where m defines the steepness of the valleys and ridges.

It is difficult to find the global minimum when m reaches large values; therefore, it is recommended to keep m < 10.

f(x^*) = -1.8013, x^* = (2.20, 1.57), x \in [0, \pi]^d

__call__(x_n, *args, **kwargs)

Get evaluation from space or point.

If the function is a Kriging instance, get and returns the variance.

Returns:function evaluation(s) [sigma(s)]
Return type:np.array([n_eval], n_feature)
__init__(d=2, m=10)[source]

Set up dimension.

logger = <logging.Logger object>
class batman.functions.analytical.Rastrigin(d=2)[source]

Rastrigin class [Molga2005].

It is a multimodal d-dimensional function which has regularly distributed local minima.

f(x)=10d+\sum_{i=1}^d [x_i^2-10\cos(2\pi x_i)]

f(x^*) = 0, x^* = (0, ..., 0), x \in [-5.12, 5.12]^d

__call__(x_n, *args, **kwargs)

Get evaluation from space or point.

If the function is a Kriging instance, get and returns the variance.

Returns:function evaluation(s) [sigma(s)]
Return type:np.array([n_eval], n_feature)
__init__(d=2)[source]

Set up dimension.

logger = <logging.Logger object>
class batman.functions.analytical.Rosenbrock(d=2)[source]

Rosenbrock class [Dixon1978].

f(x)=\sum_{i=1}^{d-1}[100(x_{i+1}-x_i^2)^2+(x_i-1)^2]

The function is unimodal, and the global minimum lies in a narrow, parabolic valley.

f(x^*) = 0, x^* = (1, ..., 1), x \in [-2.048, 2.048]^d

__call__(x_n, *args, **kwargs)

Get evaluation from space or point.

If the function is a Kriging instance, get and returns the variance.

Returns:function evaluation(s) [sigma(s)]
Return type:np.array([n_eval], n_feature)
__init__(d=2)[source]

Set up dimension.

logger = <logging.Logger object>
class batman.functions.analytical.SixHumpCamel[source]

SixHumpCamel class [Molga2005].

\left(4-2.1x_1^2+\frac{x_1^4}{3}\right)x_1^2+x_1x_2+
(-4+4x_2^2)x_2^2

The function has six local minima, two of which are global.

f(x^*) = -1.0316, x^* = (0.0898, -0.7126), (-0.0898,0.7126),
x_1 \in [-3, 3], x_2 \in [-2, 2]

__call__(x_n, *args, **kwargs)

Get evaluation from space or point.

If the function is a Kriging instance, get and returns the variance.

Returns:function evaluation(s) [sigma(s)]
Return type:np.array([n_eval], n_feature)
__init__()[source]

Set up attributes.

logger = <logging.Logger object>

Mascaret module

class batman.functions.telemac_mascaret.Mascaret[source]

Mascaret class.

__call__(x_n, *args, **kwargs)

Get evaluation from space or point.

If the function is a Kriging instance, get and returns the variance.

Returns:function evaluation(s) [sigma(s)]
Return type:np.array([n_eval], n_feature)
__init__()[source]

Read the database and define the channel.

logger = <logging.Logger object>
class batman.functions.telemac_mascaret.MascaretApi(settings, user_settings)[source]

Mascaret API.

__call__(x=None, Qtime=None, saveall=False)[source]

Run the application using user_settings.

Parameters:
  • x (list) – inputs [Ks, Q]
  • saveall (bool) – Change the default name of the Results file
__init__(settings, user_settings)[source]

Constructor.

  1. Loads the Mascaret library with MascaretApi.load_mascaret(),
  2. Creates an instance of Mascaret with MascaretApi.create_model(),
  3. Reads model files from “settings” with MascaretApi.file_model(),
  4. Gets model size with MascaretApi.model_size(),
  5. Gets the simulation times with MascaretApi.simu_times(),
  6. Reads and applies user defined parameters from user_settings,
  7. Initializes the model with MascaretApi.init_model().
allstate()[source]

Get state at all simulation points in user_settings['misc']['all_outstate'].

Use Mascaret Api C_GET_TAILLE_VAR_MASCARET() and C_GET_DOUBLE_MASCARET().

Boolean index:Flag to return all state
Returns:State at each simulation point
Return type:list of floats
allstateQ()[source]

Get state Q at all simulation points in user_settings['misc']['all_outstate'].

Use Mascaret Api C_GET_TAILLE_VAR_MASCARET() and C_GET_DOUBLE_MASCARET().

Boolean index:Flag to return all state
Returns:State at each simulation point
Return type:list of floats
bc_qt

Get boundary conditions Qt.

Use Mascaret Api C_GET_TAILLE_VAR_MASCARET() and C_GET_DOUBLE_MASCARET().

Returns:boundary conditions for Qt
Return type:list(float)
create_model()[source]

Create an instance of Mascaret.

Uses Mascaret Api C_CREATE_MASCARET().

cross_section

Get CrossSection everywhere.

Uses Mascaret Api C_GET_TAILLE_VAR_MASCARET() and C_GET_DOUBLE_MASCARET().

Requires `ZBOT:{idx, value}` in `user.json`.

curv_abs()[source]

Get abscurv over entire domain.

Use Mascaret Api C_GET_TAILLE_VAR_MASCARET() and C_GET_DOUBLE_MASCARET().

Returns:curv_abs list
Return type:list of float
empty_opt()[source]

Hack to be able to re-launch Mascaret.

error
error_message()[source]

Error message wrapper.

Returns:Error message
Return type:str
file_model(settings)[source]

Read the model files (.xcas, .geo, .lig, .loi, .dtd) from settings, which is a JSON file.

Uses Mascaret Api C_IMPORT_MODELE_MASCARET().

Parameters:settings (str) – path of JSON settings file
friction_minor

Get minor friction coefficient at index ks_idx.

Use Mascaret Api C_GET_TAILLE_VAR_MASCARET() and C_GET_DOUBLE_MASCARET().

Returns:Minor friction coefficient
Return type:float
ind_zone_frot

Get indices of the beginning and end of all the friction zones.

Use Mascaret Api C_GET_TAILLE_VAR_MASCARET() and C_GET_INT_MASCARET().

Returns:Index of beginning and end
Return type:list(int)
info_all_bc()[source]

Return numbers and names of all boundary conditions.

Use Mascaret Api C_GET_NOM_CONDITION_LIMITE_MASCARET().

Returns:
Return type:float, list(float), list(float)
init_model()[source]

Initialize the model either from constant values (init_cst set in user_settings, together with the Q_cst and Z_cst values) or from the file.lig given in settings.

Uses Mascaret Api C_INIT_LIGNE_MASCARET() or C_INIT_ETAT_MASCARET().

load_mascaret(libmascaret)[source]

Load Mascaret library.

Parameters:libmascaret (str) – path to the library
logger = <logging.Logger object>
model_size

Get model size (number of nodes).

Uses C_GET_TAILLE_VAR_MASCARET().

Returns:Size of the model
Return type:int
run_mascaret(x=None, Qtime=None, flag=None, saveall=False)[source]

Run the Mascaret simulation.

Uses Mascaret Api C_CALCUL_MASCARET(). If x is not None, Ks and Q are modified before running: when flag is None, both parameters are modified (so x must be set accordingly); if flag is set to ‘Ks’, only that parameter is considered. If Qtime is not None, the boundary conditions are provided in a .csv file.

Parameters:
  • x (list) – inputs [Ks, Q]
  • flag (str) – None, ‘Ks’ or ‘Q’
  • saveall (bool) – Change the default name of the Results file
Returns:Water level at index_outstate, or all states if all_outstate is true
Return type:double

simu_times

Get the simulation times from .xcas in settings.

Uses Mascaret Api C_GET_DOUBLE_MASCARET().

Returns:time step, initial time and final time
Return type:tuple(float)
state(index)[source]

Get state at given index in user_settings['misc']['index_outstate'].

Use Mascaret Api C_GET_TAILLE_VAR_MASCARET() and C_GET_DOUBLE_MASCARET().

Parameters:index (float) – Index to get the state from
Returns:State at a given index
Return type:float
user_defined(user_settings=None)[source]

Read user parameters from user_settings and apply the values.

Look for Q_BC (Q_BC={'idx', 'value'}) and Ks (Ks={'zone', 'idx', 'value', 'ind_zone'}), where Ks applies to one point or one zone. Uses zone_friction_minor(), friction_minor() and bc_qt().

Parameters:user_settings (str) – Path of the JSON settings file
zone_friction_minor

Get minor friction coefficient at zone ind_zone.

Use ind_zone_frot and friction_minor.

Returns:Friction coefficient at zone
Return type:list(float)

batman.tasks: Tasks

tasks.SnapshotIO(parameter_names, feature_names) Manage data I/Os and data generation for Snapshot.
tasks.Snapshot(point, data) A snapshot container.
tasks.ProviderFile(executor, io_manager, …) A Provider class that builds snapshots whose data come from a file.
tasks.ProviderPlugin(executor, io_manager, …) A Provider that builds snapshots whose data come from a Python function.

Tasks module

class batman.tasks.Snapshot(point, data)[source]

A snapshot container.

Its very basic interface is just used for binding a dataset to a sample point.

__init__(point, data)[source]

Initialize a snapshot.

Parameters:
  • point – snapshot point coordinates.
  • data – snapshot data.
data

Snapshot data.

point

Snapshot point coordinates.
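A minimal sketch with illustrative values:

from batman.tasks import Snapshot

# Bind a dataset to the sample point it was computed at.
snap = Snapshot(point=[0.5, 1.2], data=[42.0, 3.7])
print(snap.point)  # [0.5, 1.2]
print(snap.data)   # [42.0, 3.7]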

class batman.tasks.SnapshotIO(parameter_names, feature_names, point_filename='sample-space.json', data_filename='sample-data.json', point_format='json', data_format='json')[source]

Manage data I/Os and data generation for Snapshot.

__init__(parameter_names, feature_names, point_filename='sample-space.json', data_filename='sample-data.json', point_format='json', data_format='json')[source]

Initialize the IO manager for snapshots.

Parameters:
  • parameter_names (list) – List of parameter labels.
  • feature_names (list) – List of feature labels.
  • point_filename (str) – Name of the snapshot point file.
  • data_filename (str) – Name of the snapshot data file.
  • point_format (str) – Name of the point file format.
  • data_format (str) – Name of the data file format.
read_data(dirpath)[source]

Read sample features from the data file.

Parameters:dirpath (str) – Path to snapshot directory.
Return type:numpy.ndarray
read_point(dirpath)[source]

Read sample parameters from the point file.

Parameters:dirpath (str) – Path to snapshot directory.
Return type:numpy.ndarray
write_data(dirpath, data)[source]

Write sample features to the data file.

Parameters:
  • dirpath (str) – Path to snapshot directory.
  • data (numpy.ndarray) – Sample features to write.
write_point(dirpath, point)[source]

Write sample parameters to the point file.

Parameters:
  • dirpath (str) – Path to snapshot directory.
  • point (array-like) – Sample parameters to write.
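A hedged sketch of a write/read round trip; the labels, values and the snapshot directory 'snapshot-0' (assumed to exist) are placeholders.

import numpy as np
from batman.tasks import SnapshotIO

io = SnapshotIO(parameter_names=['x1', 'x2'], feature_names=['f'])

# Write then read back one sample in the hypothetical directory.
io.write_point('snapshot-0', np.array([0.5, 1.2]))
io.write_data('snapshot-0', np.array([3.7]))
point = io.read_point('snapshot-0')   # numpy.ndarray of parameters
data = io.read_data('snapshot-0')     # numpy.ndarray of features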
class batman.tasks.ProviderFile(executor, io_manager, job_settings)[source]

A Provider class that builds snapshots whose data come from a file.

__init__(executor, io_manager, job_settings)[source]

Initialize the provider.

Parameters:
  • executor (concurrent.futures.Executor) – a task pool executor.
  • io_manager (SnapshotIO) – defines snapshots as files.
  • job_settings (dict) –

    specify a job for building snapshot data with the following keys (see the sketch after this list):

    • command (str): command to execute.
    • context_directory (str): directory from which to get job resources.
    • coupling_directory (str): default is batman-coupling.
    • clean (bool): default is False.
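A hedged sketch of such a job_settings dictionary; the command, the context directory and the executor choice are placeholders.

from concurrent.futures import ThreadPoolExecutor
from batman.tasks import ProviderFile, SnapshotIO

job_settings = {
    'command': 'bash compute_snapshot.sh',   # placeholder command
    'context_directory': './my_case',        # placeholder resources directory
    'coupling_directory': 'batman-coupling',
    'clean': False,
}
io = SnapshotIO(['x1', 'x2'], ['f'])
provider = ProviderFile(ThreadPoolExecutor(max_workers=2), io, job_settings)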
known_points

Dictionary mapping known snapshot points to their location.

logger = <logging.Logger object>
snapshot(point, snapshot_dir)[source]

Snapshot bound to an asynchronous job that reads data from a file.

Parameters:
  • point (batman.space.Point.) – the point in parameter space at which to provide a snapshot.
  • snapshot_dir (str) – the directory containing the snapshot data files.
Returns:A snapshot.
Return type:Snapshot.

class batman.tasks.ProviderPlugin(executor, io_manager, plug_settings)[source]

A Provider that builds snapshots whose data come from a Python function.

__init__(executor, io_manager, plug_settings)[source]

Initialize the provider.

Parameters:
  • executor (concurrent.futures.Executor) – a task pool executor.
  • io_manager (SnapshotIO) – defines snapshots as files.
  • plug_settings (dict) –

    specify how to load a plugin with the following keys (see the sketch after this list):

    • module (str): python module to load.
    • function (str): function in module to execute when a snapshot is required.
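A hedged sketch of such a plug_settings dictionary; the module and function names are ours, and the assumed plugin signature (a point in, its data out) is not confirmed by this reference.

# my_plugin.py -- assumed interface: take a sample point, return its data.
def f(point):
    x1, x2 = point
    return [x1 + x2]

# Settings telling the provider which function to call for each snapshot.
plug_settings = {'module': 'my_plugin', 'function': 'f'}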
known_points

Dictionary mapping known snapshot points to their location.

logger = <logging.Logger object>
snapshot(point, *ignored)[source]

Snapshot bound to an asynchronous job.

It executes the provided plugin function.

Parameters:point (batman.space.Point) – the point in parameter space at which to provide a snapshot.
Returns:A Snapshot.
Return type:Snapshot

batman.misc: Misc

misc.NestedPool([processes, initializer, …]) NestedPool class.
misc.ProgressBar(total) Print progress bar in console.
misc.optimization(bounds[, discrete]) Perform a discrete or a continuous/discrete optimization.
misc.import_config(path_config, path_schema) Import a configuration file.
misc.check_yes_no(prompt, default) Ask user for delete confirmation.
misc.ask_path(prompt, default, root) Ask user for a folder path.
misc.abs_path(value) Get absolute path.
misc.clean_path(path) Return an absolute and normalized path.

Misc module

batman.misc.clean_path(path)[source]

Return an absolute and normalized path.

batman.misc.check_yes_no(prompt, default)[source]

Ask user for delete confirmation.

Parameters:
  • prompt (str) – yes-no question
  • default (str) – default value
Returns:

true if yes

Return type:

boolean

batman.misc.abs_path(value)[source]

Get absolute path.

batman.misc.import_config(path_config, path_schema)[source]

Import a configuration file.

class batman.misc.ProgressBar(total)[source]

Print progress bar in console.

__call__()[source]

Update bar.

__init__(total)[source]

Create a bar.

Parameters:total (int) – number of iterations
compute_eta()[source]

Compute ETA.

Compare current time with init_time.

Returns:eta, vel
Return type:str
show_progress(eta=None, vel=None)[source]

Print bar and ETA if relevant.

Parameters:
  • eta (str) – ETA in H:M:S
  • vel (str) – iteration/second
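A minimal usage sketch based on the documented __init__ and __call__; the workload is a stand-in.

import time
from batman.misc import ProgressBar

n_iter = 10
bar = ProgressBar(n_iter)
for _ in range(n_iter):
    time.sleep(0.1)   # stand-in for real work
    bar()             # update the bar; ETA is printed when relevant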
class batman.misc.NestedPool(processes=None, initializer=None, initargs=(), maxtasksperchild=None)[source]

NestedPool class.

Inherit from pathos.multiprocessing.Pool. Enable nested process pool.

Process

alias of NoDaemonProcess
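Since NestedPool inherits from pathos.multiprocessing.Pool, the usual Pool interface should apply; a hedged sketch:

from batman.misc import NestedPool

def square(i):
    return i ** 2

if __name__ == '__main__':
    # NestedPool exposes the usual Pool interface (map, close, join, ...).
    with NestedPool(processes=2) as pool:
        results = pool.map(square, range(4))
    print(results)  # [0, 1, 4, 9]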

batman.misc.ask_path(prompt, default, root)[source]

Ask user for a folder path.

Parameters:
  • prompt (str) – Ask.
  • default (str) – default value.
  • root (str) – root path.
Returns:

path if folder exists.

Return type:

str.

batman.misc.optimization(bounds, discrete=None)[source]

Perform a discrete or a continuous/discrete optimization.

If a variable is discrete, the decorator finds the optimum by running one optimization per discrete value and returning the best result, as sketched after the parameter list below.

Parameters:
  • bounds (array_like) – bounds for optimization ([min, max], n_features).
  • discrete (int) – index of the discrete variable.
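batman.misc.optimization is a decorator whose exact calling convention is not documented here; the sketch below only illustrates the idea described above (one continuous optimization per discrete value, keeping the best). It uses scipy and assumes, as a choice of our own, a (n_features, 2) bounds layout and integer-valued discrete levels.

import numpy as np
from scipy.optimize import minimize

def optimize_mixed(func, bounds, discrete=None):
    """One continuous optimization per discrete value; keep the best result."""
    bounds = np.asarray(bounds, dtype=float)   # assumed shape: (n_features, 2)
    x0 = bounds.mean(axis=1)
    if discrete is None:
        return minimize(func, x0, bounds=bounds)

    best = None
    low, high = bounds[discrete]
    for value in range(int(low), int(high) + 1):   # assumes integer levels
        def frozen(x, value=value):
            x = np.array(x, dtype=float)
            x[discrete] = value                    # pin the discrete coordinate
            return func(x)
        res = minimize(frozen, x0, bounds=bounds)
        res.x[discrete] = value                    # report the pinned value
        if best is None or res.fun < best.fun:
            best = res
    return best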
batman.misc.cpu_system()[source]

Number of CPUs on the system.

batman.input_output: Input Output

input_output.available_formats() Returns the list of available format names.
input_output.formater(format_name) Returns a Formater

IO module

Provides Formater objects to deal with I/Os.

Every formater has the same interface, exposing the two methods read and write.

Example: Using the json formater
>> from batman.input_output import formater
>> varnames = ['x1', 'x2', 'x3']
>> data = [[1, 2, 3], [87, 74, 42]]
>> fmt = formater('json')
>> fmt.write('file.json', data, varnames)
{'x1': [1, 87], 'x2': [2, 74], 'x3': [3, 42]}
>> # can load a subset of variables, in a different order (unavailable for format 'npy')
>> fmt.read('file.json', ['x2', 'x1'])
array([[2, 1], [74, 87]])
batman.input_output.formater(format_name)[source]

Returns a Formater

batman.input_output.available_formats()[source]

Returns the list of available format names.
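A short hedged usage sketch; the exact list of formats depends on the installation ('json' and 'npy' are the ones mentioned above).

from batman.input_output import available_formats, formater

print(available_formats())   # e.g. a list containing 'json' and 'npy'
fmt = formater('json')
fmt.write('file.json', [[1, 2, 3]], ['x1', 'x2', 'x3'])
print(fmt.read('file.json', ['x1', 'x2', 'x3']))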