# model

## MetricConfiguration

Bases: TypedDict

Configuration for a model evaluation metric.

Attributes:

| Name | Type | Description |
| ---- | ---- | ----------- |
| `enabled` | `bool` | Whether the metric is enabled or disabled. |
| `rope_lower_bound` | `Optional[float]` | Lower bound of the region of practical equivalence (ROPE) for the metric. Required when the hypothesis is `MODEL_PERFORMANCE_WITHIN_RANGE`. |
| `rope_upper_bound` | `Optional[float]` | Upper bound of the region of practical equivalence (ROPE) for the metric. Required when the hypothesis is `MODEL_PERFORMANCE_WITHIN_RANGE`. |
| `hdi_width` | `Optional[float]` | Width the highest density interval (HDI) of the metric must reach before the hypothesis is evaluated. |
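
As an illustration, a configuration for a ROC AUC metric under the `MODEL_PERFORMANCE_WITHIN_RANGE` hypothesis might look like the sketch below. The `"ROC_AUC"` key and all numeric values are assumptions, not prescriptions; check the `PerformanceMetric` values supported by your SDK version.

```python
# Sketch of a MetricConfiguration for one metric; a plain dict satisfies a
# TypedDict structurally. All values here are illustrative assumptions.
roc_auc_config = {
    "enabled": True,
    "rope_lower_bound": 0.80,  # required for MODEL_PERFORMANCE_WITHIN_RANGE
    "rope_upper_bound": 0.90,  # required for MODEL_PERFORMANCE_WITHIN_RANGE
    "hdi_width": 0.05,         # HDI must be at most this wide before evaluation
}

metrics_configuration = {"ROC_AUC": roc_auc_config}
```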

## Model

### add_evaluation_data (classmethod)

```python
add_evaluation_data(model_id: str, data: pd.DataFrame) -> None
```

Add evaluation data to a model.

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `model_id` | `str` | ID of the model. | *required* |
| `data` | `DataFrame` | Data to be added. | *required* |
Note

This method does not update existing data; it only adds new data. To update existing data, use `upsert_evaluation_data` instead.
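
A hedged usage sketch, assuming the package is importable as `nannyml_cloud_sdk` and that the `Model` class shown here lives at its root; the model ID and file path are hypothetical.

```python
import pandas as pd
import nannyml_cloud_sdk as nml_sdk  # assumed import path

# Hypothetical batch of new production rows matching the model schema.
new_batch = pd.read_parquet("evaluation_batch.parquet")

# Appends the rows as evaluation data; existing rows are left untouched.
nml_sdk.Model.add_evaluation_data(model_id="my-model-id", data=new_batch)
```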

### create (classmethod)

```python
create(
    name: str,
    schema: ModelSchema,
    reference_data: pd.DataFrame,
    hypothesis: HypothesisType,
    classification_threshold: float,
    metrics_configuration: Dict[PerformanceMetric, MetricConfiguration],
    key_performance_metric: PerformanceMetric,
    evaluation_data: Optional[pd.DataFrame] = None,
) -> ModelDetails
```

Create a new model.

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `name` | `str` | Name for the model. | *required* |
| `schema` | `ModelSchema` | Schema of the model, typically created using `Schema.from_df`. | *required* |
| `reference_data` | `DataFrame` | Reference data to use for the model. | *required* |
| `hypothesis` | `HypothesisType` | The type of hypothesis the model is trying to validate. One of:<br>- `MODEL_PERFORMANCE_NO_WORSE_THAN_REFERENCE`: the model's performance is not worse than the reference.<br>- `MODEL_PERFORMANCE_WITHIN_RANGE`: the model's performance is within a specified range. | *required* |
| `classification_threshold` | `float` | The threshold used to turn predicted probabilities into binary predictions. | *required* |
| `metrics_configuration` | `Dict[PerformanceMetric, MetricConfiguration]` | Configuration for each metric to be used in the model. | *required* |
| `key_performance_metric` | `PerformanceMetric` | Key performance metric for the model. | *required* |
| `evaluation_data` | `Optional[DataFrame]` | Evaluation data to use for the model. If the data contains targets, targets must always be provided together with the evaluation data. | `None` |

Returns:

| Type | Description |
| ---- | ----------- |
| `ModelDetails` | Details about the model once it has been created. |
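
Putting the pieces together, a model creation call might look like the sketch below. The import alias, the `Schema.from_df` arguments, and the enum-like string values are assumptions for illustration; consult the corresponding references in your SDK version.

```python
import pandas as pd
import nannyml_cloud_sdk as nml_sdk  # assumed import path

reference_df = pd.read_parquet("reference.parquet")    # hypothetical file
evaluation_df = pd.read_parquet("evaluation.parquet")  # hypothetical file

# Exact Schema.from_df arguments may differ; see its reference documentation.
schema = nml_sdk.Schema.from_df(reference_df)

details = nml_sdk.Model.create(
    name="churn-model-v2",                  # hypothetical model name
    schema=schema,
    reference_data=reference_df,
    hypothesis="MODEL_PERFORMANCE_NO_WORSE_THAN_REFERENCE",
    classification_threshold=0.5,
    metrics_configuration={"ROC_AUC": {"enabled": True, "hdi_width": 0.05}},
    key_performance_metric="ROC_AUC",
    evaluation_data=evaluation_df,
)

# ModelDetails is a TypedDict, so fields are accessed by key.
print(details["id"])
```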

### delete (classmethod)

```python
delete(model_id: str) -> None
```

Delete a model.

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `model_id` | `str` | ID of the model to delete. | *required* |
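
For example (same assumed import path as above; the model ID is hypothetical):

```python
import nannyml_cloud_sdk as nml_sdk  # assumed import path

nml_sdk.Model.delete(model_id="my-model-id")  # hypothetical ID
```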

### get (classmethod)

```python
get(model_id: str) -> ModelDetails
```

Get details for a model.

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `model_id` | `str` | ID of the model to get details for. | *required* |

Returns:

| Type | Description |
| ---- | ----------- |
| `ModelDetails` | Detailed information about the model. |
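
A brief sketch of retrieving and inspecting a model (assumed import path, hypothetical ID):

```python
import nannyml_cloud_sdk as nml_sdk  # assumed import path

details = nml_sdk.Model.get(model_id="my-model-id")  # hypothetical ID

# ModelDetails extends ModelSummary, so summary fields are available too.
print(details["name"], details["problemType"], details["createdAt"])
if details["latestRun"] is None:
    print("No runs have been performed yet.")
```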

### get_evaluation_data_history (classmethod)

```python
get_evaluation_data_history(model_id: str) -> List[DataSourceEvent]
```

Get evaluation data history for a model.

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `model_id` | `str` | ID of the model. | *required* |

Returns:

| Type | Description |
| ---- | ----------- |
| `List[DataSourceEvent]` | List of events related to evaluation data for the model. |

### get_reference_data_history (classmethod)

```python
get_reference_data_history(model_id: str) -> List[DataSourceEvent]
```

Get reference data history for a model.

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `model_id` | `str` | ID of the model. | *required* |

Returns:

| Type | Description |
| ---- | ----------- |
| `List[DataSourceEvent]` | List of events related to reference data for the model. |
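
Both history methods follow the same pattern. A brief sketch, with the same assumptions about the import path and a hypothetical model ID; the fields of `DataSourceEvent` are not documented on this page, so the events are printed raw:

```python
import nannyml_cloud_sdk as nml_sdk  # assumed import path

evaluation_events = nml_sdk.Model.get_evaluation_data_history("my-model-id")
reference_events = nml_sdk.Model.get_reference_data_history("my-model-id")

for event in evaluation_events:
    print(event)
```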

### list (classmethod)

```python
list(
    name: Optional[str] = None, problem_type: Optional[ProblemType] = None
) -> List[ModelSummary]
```

List defined models.

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `name` | `Optional[str]` | Optional name filter. | `None` |
| `problem_type` | `Optional[ProblemType]` | Optional problem type filter. | `None` |

Returns:

| Type | Description |
| ---- | ----------- |
| `List[ModelSummary]` | List of models that match the provided filter criteria. |
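
For instance (assumed import path; the name filter is hypothetical):

```python
import nannyml_cloud_sdk as nml_sdk  # assumed import path

# Both filters are optional; calling list() with no arguments returns all models.
models = nml_sdk.Model.list(name="churn")  # hypothetical name filter

for summary in models:
    print(summary["id"], summary["name"], summary["createdAt"])
```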

### upsert_evaluation_data (classmethod)

```python
upsert_evaluation_data(model_id: str, data: pd.DataFrame) -> None
```

Add or update analysis data for a model.

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `model_id` | `str` | ID of the model. | *required* |
| `data` | `DataFrame` | Data to be added or updated. | *required* |
Note

This method compares existing data with the new data to determine which rows to update and which to add. If you are certain you are only adding new data, use `add_evaluation_data` instead for better performance.
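
A sketch of one plausible use, e.g. re-uploading rows once delayed ground truth becomes available (same assumptions about the import path; the file and ID are hypothetical):

```python
import pandas as pd
import nannyml_cloud_sdk as nml_sdk  # assumed import path

# Hypothetical: rows previously uploaded without targets, now with targets.
batch_with_targets = pd.read_parquet("evaluation_with_targets.parquet")

# Matching existing rows are updated; genuinely new rows are added.
nml_sdk.Model.upsert_evaluation_data(model_id="my-model-id", data=batch_with_targets)
```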

## ModelDetails

Bases: ModelSummary

Detailed information about a model.

Attributes:

| Name | Type | Description |
| ---- | ---- | ----------- |
| `latestRun` | `Optional[RunSummary]` | The currently active run or the latest run performed for the model. This is `None` if no runs have been performed yet. |

## ModelSummary

Bases: TypedDict

Summary of a model.

Attributes:

| Name | Type | Description |
| ---- | ---- | ----------- |
| `id` | `str` | Unique identifier of the model (generated by NannyML Cloud when a model is created). |
| `name` | `str` | User-defined name of the model. |
| `problemType` | `ProblemType` | Type of problem the model is trying to solve. |
| `createdAt` | `datetime` | Timestamp when the model was created. |