Experiment

add_experiment_data classmethod

add_experiment_data(experiment_id: str, data: pd.DataFrame) -> None

Add evaluation data to an experiment.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `experiment_id` | `str` | ID of the experiment. | required |
| `data` | `DataFrame` | Data to be added. | required |
Note

This method does not update existing data. It only adds new data. If you want to update existing data, use upsert_experiment_data instead.
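
A minimal usage sketch. The import path and the experiment ID are assumptions; adapt them to your installation and substitute a real ID.

```python
import pandas as pd

from nannyml_cloud_sdk import Experiment  # assumed import path

# New evaluation data to append to an existing experiment.
new_data = pd.read_csv("new_evaluation_data.csv")

# Appends rows only; rows already stored are never modified.
Experiment.add_experiment_data(experiment_id="<experiment id>", data=new_data)
```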

create classmethod

create(name: str, schema: ExperimentSchema, experiment_data: pd.DataFrame, experiment_type: ExperimentType, metrics_configuration: Dict[str, MetricConfiguration], key_experiment_metric: Optional[str] = None) -> ExperimentDetails

Create a new experiment.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `name` | `str` | Name for the experiment. | required |
| `schema` | `ExperimentSchema` | Schema of the experiment, typically created using `Schema.from_df`. | required |
| `experiment_data` | `DataFrame` | Data to be used for the experiment. | required |
| `experiment_type` | `ExperimentType` | Type of the experiment. | required |
| `metrics_configuration` | `Dict[str, MetricConfiguration]` | Configuration for each metric to be used in the experiment. | required |
| `key_experiment_metric` | `Optional[str]` | Optional metric to be used as the key experiment metric. | `None` |

Returns:

| Type | Description |
|------|-------------|
| `ExperimentDetails` | Details about the experiment once it has been created. |
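
A sketch of creating an experiment. The import path, the ExperimentType member, the metric name, and the numeric bounds are all assumptions rather than values prescribed by the API; see MetricConfiguration below for the meaning of the metric fields.

```python
import pandas as pd

from nannyml_cloud_sdk import Experiment, ExperimentType, Schema  # assumed import path

experiment_data = pd.read_csv("experiment_data.csv")

details = Experiment.create(
    name="my-first-experiment",
    schema=Schema.from_df(experiment_data),  # exact from_df signature may differ
    experiment_data=experiment_data,
    experiment_type=ExperimentType.A_B_TESTING,  # hypothetical enum member
    metrics_configuration={
        "accuracy": {  # hypothetical metric name; values are illustrative
            "enabled": True,
            "rope_lower_bound": -0.01,
            "rope_upper_bound": 0.01,
            "hdi_width": 0.02,
        },
    },
    key_experiment_metric="accuracy",
)
print(details["id"])  # ID assigned by NannyML Cloud
```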

delete classmethod

delete(experiment_id: str) -> None

Delete an experiment.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `experiment_id` | `str` | ID of the experiment to delete. | required |
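
A sketch, assuming the same import path as above and a placeholder ID:

```python
from nannyml_cloud_sdk import Experiment  # assumed import path

# Remove the experiment identified by the given ID.
Experiment.delete(experiment_id="<experiment id>")
```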

get classmethod

get(experiment_id: str) -> ExperimentDetails

Get details for an experiment.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `experiment_id` | `str` | ID of the experiment to get details for. | required |

Returns:

| Type | Description |
|------|-------------|
| `ExperimentDetails` | Detailed information about the experiment. |
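
A sketch (assumed import path). ExperimentDetails is a TypedDict, so its fields are read with dict-style access:

```python
from nannyml_cloud_sdk import Experiment  # assumed import path

details = Experiment.get(experiment_id="<experiment id>")
print(details["name"], details["createdAt"])
```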

get_data_history classmethod

get_data_history(experiment_id: str) -> List[DataSourceEvent]

Get the data history for an experiment.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `experiment_id` | `str` | ID of the experiment. | required |

Returns:

| Type | Description |
|------|-------------|
| `List[DataSourceEvent]` | List of events related to reference data for the experiment. |
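
A sketch (assumed import path). The fields of DataSourceEvent are not documented on this page, so the example only iterates over the returned events:

```python
from nannyml_cloud_sdk import Experiment  # assumed import path

events = Experiment.get_data_history(experiment_id="<experiment id>")
for event in events:
    print(event)
```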

list classmethod

list(name: Optional[str] = None, experiment_type: Optional[ExperimentType] = None) -> List[ExperimentSummary]

List defined experiments.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `name` | `Optional[str]` | Optional name filter. | `None` |
| `experiment_type` | `Optional[ExperimentType]` | Optional experiment type filter. | `None` |

Returns:

| Type | Description |
|------|-------------|
| `List[ExperimentSummary]` | List of experiments that match the provided filter criteria. |
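
A sketch (assumed import path; the name filter value is illustrative):

```python
from nannyml_cloud_sdk import Experiment  # assumed import path

# Calling list() with no arguments returns all experiments.
for summary in Experiment.list():
    print(summary["id"], summary["name"], summary["createdAt"])

# Narrow the result with the optional filters.
matching = Experiment.list(name="my-first-experiment")
```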

upsert_experiment_data classmethod

upsert_experiment_data(experiment_id: str, data: pd.DataFrame) -> None

Add or update analysis data for an experiment.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `experiment_id` | `str` | ID of the experiment. | required |
| `data` | `DataFrame` | Data to be added/updated. | required |
Note

This method compares existing data with the new data to determine which rows to update and which to add. If you are certain you are only adding new data, it is recommended to use add_experiment_data instead for better performance.
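
A sketch, assuming the same import path as above and a placeholder ID:

```python
import pandas as pd

from nannyml_cloud_sdk import Experiment  # assumed import path

# A mixed batch: corrections to existing rows plus newly collected rows.
data = pd.read_csv("corrected_evaluation_data.csv")

# Rows matching existing data are updated; the rest are added.
Experiment.upsert_experiment_data(experiment_id="<experiment id>", data=data)
```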

ExperimentDetails

Bases: ExperimentSummary

Detailed information about an experiment.

Attributes:

| Name | Type | Description |
|------|------|-------------|
| `latestRun` | `Optional[RunSummary]` | The currently active run or latest run performed for the experiment. This is `None` if no runs have been performed yet. |
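
Because latestRun is None until a run has been performed, guard access to it. A sketch, assuming the same import path as above:

```python
from nannyml_cloud_sdk import Experiment  # assumed import path

details = Experiment.get(experiment_id="<experiment id>")

latest_run = details["latestRun"]
if latest_run is None:
    print("No runs have been performed yet.")
else:
    print(latest_run)
```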

ExperimentSummary

Bases: TypedDict

Summary of an experiment.

Attributes:

| Name | Type | Description |
|------|------|-------------|
| `id` | `str` | Unique identifier of the experiment (generated by NannyML Cloud when an experiment is created). |
| `name` | `str` | User-defined name of the experiment. |
| `createdAt` | `datetime` | Timestamp when the experiment was created. |

MetricConfiguration

Bases: TypedDict

Configuration for a metric in an experiment.

Attributes:

| Name | Type | Description |
|------|------|-------------|
| `enabled` | `bool` | Whether the metric is enabled or disabled. |
| `rope_lower_bound` | `float` | Lower bound of the region of practical equivalence (ROPE) for the metric. |
| `rope_upper_bound` | `float` | Upper bound of the region of practical equivalence (ROPE) for the metric. |
| `hdi_width` | `float` | Required width of the highest density interval (HDI) for the metric before evaluating the hypothesis. |
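
A sketch of a metrics_configuration mapping built from these fields, suitable for passing to Experiment.create. The metric name and numeric values are illustrative assumptions, not recommended settings:

```python
from typing import Dict

from nannyml_cloud_sdk import MetricConfiguration  # assumed import path

metrics_configuration: Dict[str, MetricConfiguration] = {
    "accuracy": {  # hypothetical metric name
        "enabled": True,
        # Differences inside [-0.01, 0.01] are treated as practically equivalent.
        "rope_lower_bound": -0.01,
        "rope_upper_bound": 0.01,
        # The HDI must narrow to 0.02 before the hypothesis is evaluated.
        "hdi_width": 0.02,
    },
}
```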