Optimizing a Polymer Compound for Strength and Biodegradability

Imagine you work at a custom materials solutions company that specializes in creating polymer compounds for various applications. A customer has requested a polymer formulation with high strength and a high biodegradability score. The customer is unsure of the tradeoff between the two properties but knows that the target application will require a strength of at least 70 MPa. As the customer is concerned about the toxicity and biodegradability of the polymer, they have limited you to a set of five thermoplastic monomers that can be used in the formulation.

You believe Bayesian optimization is well suited to this task and decide to put together an optimization script using Honegumi to solve the problem.

Taking note of the available composition and process parameters, you decide to restrict your design space to the following:

      Parameter Name    Bounds
x1    Monomer A         [0, 1]
x2    Monomer B         [0, 1]
x3    Monomer C         [0, 1]
x4    Monomer D         [0, 1]
x5    Monomer E         [0, 1]
x6    Extrusion Rate    [0.01, 0.1]
x7    Temperature       [120, 200]
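Note that the five monomer fractions together define the full composition, so x1 through x5 should sum to one. The optimization script generated later handles this by letting x1 through x4 vary freely under a linear constraint (x1 + x2 + x3 + x4 <= 1) and computing x5 as the remainder. A minimal sketch of that bookkeeping (the helper name here is illustrative, not part of the generated script):

def remaining_fraction(x1, x2, x3, x4, total=1.0):
    """Return the slack monomer fraction (x5) so the five fractions sum to `total`.

    Assumes x1 + x2 + x3 + x4 <= total, which the optimizer later enforces
    as a linear parameter constraint.
    """
    return total - (x1 + x2 + x3 + x4)

# e.g. remaining_fraction(0.3, 0.2, 0.1, 0.0) returns 0.4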

To help find a solution quickly, you dig up some data on these polymer systems from the literature and decide to use it to improve the surrogate model. While none of these formulations meets the customer's requirement, you think they might at least help tell your model where NOT to look. The collected data is as follows:

x1     x2     x3     x4     x5     x6     x7     Strength   BioDeg
0.3    0.2    0.1    0.0    0.4    0.05   150    43.73      1.81
0.0    0.0    0.3    0.7    0.0    0.1    160    25.79      3.83
0.2    0.2    0.2    0.2    0.2    0.09   184    41.37      2.29

A dummy objective function that returns values for each property has been constructed in the code cell below. This function aims to emulate the results of experimental trials under different inputs. Although we could easily find optimal values directly from the equations, we will pretend that the objective function is unknown and use a Bayesian optimization approach to find the optimal set of input parameters instead.

[6]:
import numpy as np

def polymer_properties(x1, x2, x3, x4, x5, x6, x7):
    """
    Calculates the strength and biodegradability properties of a polymer based
    on a set of given input parameters.

    Parameters:
    x1 (float): volume fraction of monomer 1. Range: [0.0, 1.0].
    x2 (float): volume fraction of monomer 2: [0.0, 1.0].
    x3 (float): volume fraction of monomer 3: [0.0, 1.0].
    x4 (float): volume fraction of monomer 4: [0.0, 1.0].
    x5 (float): volume fraction of monomer 5: [0.0, 1.0].
    x6 (float): the polymer extrusion rate. Range: [0.01, 0.1].
    x7 (float): the processing temperature. Range: [120.0, 200.0].

    Returns:
    dict: calculated strength and biodegradability properties of polymer in form:
          {
              "strength": float,
              "biodegradability": float
          }
    """
    strength = float(
        np.exp(-(50*(x1-0.5)**2)) +
        np.exp(-(5*(x2-0.4)**2)) -
        0.8*x3 +
        np.exp(-(300*(x4-0.1)**2)) -
        0.3*x5**2 +
        np.exp(-(2000*(x6-0.025)**2)) +
        1/(1+np.exp(-(x7-137)/15))
    )

    biodegradability = float(
        -1/(1+np.exp(-(x1-0.1)/0.1)) + 1 +
        -1/(1+np.exp(-(x2-0.3)/0.1)) + 1 +
        x3**2 +
        x4 +
        1/(1+np.exp(-(x5-0.7)/0.075)) +
        10*x6 +
        -(x7/200)**2+1
    )

    return {"strength" : strength*25, "biodegradability" : biodegradability*5}

Applying Honegumi

We will now use the Honegumi website to generate a script that will help us optimize the polymer parameters. From the description, we observe that our problem is a multi-objective optimization problem with a constraint on the fractional sum of the monomer components and a custom threshold on strength. Additionally, we would like to include some historical data in our model training. To create an optimization script for this problem, we select the following options:

[Image: Honegumi option selection grid showing the choices made for this problem]

The Honegumi-generated optimization script provides a framework for our optimization campaign. In the code cells below, we make several modifications to adapt this generated script to our problem.

Modifying the Code for Our Problem

We can adapt this code to our problem with a few simple modifications. Wherever a modification has been made to the code, a comment starting with # CHANGE: has been added along with a brief description of the change.

[12]:
import numpy as np
from ax.service.ax_client import AxClient, ObjectiveProperties


import pandas as pd

obj1_name = "strength" # CHANGE: add name of first objective
obj2_name = "biodegradability" # CHANGE: add name of first objective


# CHANGE: remove the moo_branin dummy objective function; we will use the polymer_properties function defined above

# CHANGE: update the total quantity for the composition constraint
total = 1.0

# CHANGE: add the historical data that was pulled from the literature
X_train = pd.DataFrame(
    [
        {"x1": 0.3, "x2": 0.2, "x3": 0.1, "x4": 0.0, "x5": 0.4, "x6": 0.05, "x7": 150.0},
        {"x1": 0.0, "x2": 0.0, "x3": 0.3, "x4": 0.7, "x5": 0.0, "x6": 0.1, "x7": 160.0},
        {"x1": 0.2, "x2": 0.2, "x3": 0.2, "x4": 0.2, "x5": 0.2, "x6": 0.09, "x7": 184.0},
    ]
)

# CHANGE: calculate the y_train values using the polymer_properties function
y_train = [polymer_properties(**row) for _, row in X_train.iterrows()]

# Define the number of training examples
n_train = len(X_train)

ax_client = AxClient(random_seed=12345) # CHANGE: add random seed for reproducibility

ax_client.create_experiment(
    parameters=[
        {"name": "x1", "type": "range", "bounds": [0.0, 1.0]}, # CHANGE: update parameter
        {"name": "x2", "type": "range", "bounds": [0.0, 1.0]}, # CHANGE: update parameter
        {"name": "x3", "type": "range", "bounds": [0.0, 1.0]}, # CHANGE: add new parameter
        {"name": "x4", "type": "range", "bounds": [0.0, 1.0]}, # CHANGE: add new parameter
        {"name": "x5", "type": "range", "bounds": [0.0, 1.0]}, # CHANGE: add new parameter
        {"name": "x6", "type": "range", "bounds": [0.01, 0.1]}, # CHANGE: add new parameter
        {"name": "x7", "type": "range", "bounds": [120.0, 200.0]}, # CHANGE: add new parameter
    ],
    objectives={
        obj1_name: ObjectiveProperties(minimize=False, threshold=70.0), # CHANGE: set minimize to False and change threshold
        obj2_name: ObjectiveProperties(minimize=False, threshold=0.0), # CHANGE: set minimize to False and change threshold
    },
    parameter_constraints=[
        f"x1 + x2 + x3 + x4 <= {total}", # CHANGE: update composition constraint
    ],
)

# Add existing data to the AxClient
for i in range(n_train):
    parameterization = X_train.iloc[i].to_dict()

    ax_client.attach_trial(parameterization)
    ax_client.complete_trial(trial_index=i, raw_data=y_train[i])


for _ in range(35): # CHANGE: increase number of trials

    parameterization, trial_index = ax_client.get_next_trial()

    # CHANGE: pull all added parameters from the parameterization
    x1 = parameterization["x1"]
    x2 = parameterization["x2"]
    x3 = parameterization["x3"]
    x4 = parameterization["x4"]
    x5 = total - (x1 + x2 + x3 + x4) # CHANGE: update composition constraint
    x6 = parameterization["x6"]
    x7 = parameterization["x7"]

    results = polymer_properties(x1, x2, x3, x4, x5, x6, x7) # CHANGE: switch to polymer function
    ax_client.complete_trial(trial_index=trial_index, raw_data=results)

pareto_results = ax_client.get_pareto_optimal_parameters()
[INFO 04-17 12:10:39] ax.service.ax_client: Starting optimization with verbose logging. To disable logging, set the `verbose_logging` argument to `False`. Note that float values in the logs are rounded to 6 decimal points.
[WARNING 04-17 12:10:39] ax.service.ax_client: Random seed set to 12345. Note that this setting only affects the Sobol quasi-random generator and BoTorch-powered Bayesian optimization models. For the latter models, setting random seed to the same number for two optimizations will make the generated trials similar, but not exactly the same, and over time the trials will diverge more.
[INFO 04-17 12:10:39] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x1. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 04-17 12:10:39] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x2. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 04-17 12:10:39] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x3. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 04-17 12:10:39] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x4. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 04-17 12:10:39] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x5. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 04-17 12:10:39] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x6. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 04-17 12:10:39] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x7. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 04-17 12:10:39] ax.service.utils.instantiation: Created search space: SearchSpace(parameters=[RangeParameter(name='x1', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x2', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x3', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x4', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x5', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x6', parameter_type=FLOAT, range=[0.01, 0.1]), RangeParameter(name='x7', parameter_type=FLOAT, range=[120.0, 200.0])], parameter_constraints=[ParameterConstraint(1.0*x1 + 1.0*x2 + 1.0*x3 + 1.0*x4 <= 1.0)]).
[INFO 04-17 12:10:39] ax.modelbridge.dispatch_utils: Using Models.BOTORCH_MODULAR since there is at least one ordered parameter and there are no unordered categorical parameters.
[INFO 04-17 12:10:39] ax.modelbridge.dispatch_utils: Calculating the number of remaining initialization trials based on num_initialization_trials=None max_initialization_trials=None num_tunable_parameters=7 num_trials=None use_batch_trials=False
[INFO 04-17 12:10:39] ax.modelbridge.dispatch_utils: calculated num_initialization_trials=14
[INFO 04-17 12:10:39] ax.modelbridge.dispatch_utils: num_completed_initialization_trials=0 num_remaining_initialization_trials=14
[INFO 04-17 12:10:39] ax.modelbridge.dispatch_utils: `verbose`, `disable_progbar`, and `jit_compile` are not yet supported when using `choose_generation_strategy` with ModularBoTorchModel, dropping these arguments.
[INFO 04-17 12:10:39] ax.modelbridge.dispatch_utils: Using Bayesian Optimization generation strategy: GenerationStrategy(name='Sobol+BoTorch', steps=[Sobol for 14 trials, BoTorch for subsequent trials]). Iterations after 14 will take longer to generate due to model-fitting.
[INFO 04-17 12:10:39] ax.core.experiment: Attached custom parameterizations [{'x1': 0.3, 'x2': 0.2, 'x3': 0.1, 'x4': 0.0, 'x5': 0.4, 'x6': 0.05, 'x7': 150.0}] as trial 0.
[INFO 04-17 12:10:39] ax.service.ax_client: Completed trial 0 with data: {'strength': (46.660238, None), 'biodegradability': (9.078739, None)}.
[INFO 04-17 12:10:39] ax.core.experiment: Attached custom parameterizations [{'x1': 0.0, 'x2': 0.0, 'x3': 0.3, 'x4': 0.7, 'x5': 0.0, 'x6': 0.1, 'x7': 160.0}] as trial 1.
[INFO 04-17 12:10:39] ax.service.ax_client: Completed trial 1 with data: {'strength': (25.79598, None), 'biodegradability': (19.168606, None)}.
[INFO 04-17 12:10:39] ax.core.experiment: Attached custom parameterizations [{'x1': 0.2, 'x2': 0.2, 'x3': 0.2, 'x4': 0.2, 'x5': 0.2, 'x6': 0.09, 'x7': 184.0}] as trial 2.
[INFO 04-17 12:10:39] ax.service.ax_client: Completed trial 2 with data: {'strength': (41.652192, None), 'biodegradability': (11.474355, None)}.
[INFO 04-17 12:10:39] ax.service.ax_client: Generated new trial 3 with parameters {'x1': 0.222494, 'x2': 0.114947, 'x3': 0.5139, 'x4': 0.020708, 'x5': 0.829686, 'x6': 0.021624, 'x7': 181.654276} using model Sobol.
[INFO 04-17 12:10:39] ax.service.ax_client: Completed trial 3 with data: {'strength': (58.799907, None), 'biodegradability': (8.839131, None)}.
[INFO 04-17 12:10:39] ax.service.ax_client: Generated new trial 4 with parameters {'x1': 0.154027, 'x2': 0.242956, 'x3': 0.386497, 'x4': 0.173289, 'x5': 0.617998, 'x6': 0.026884, 'x7': 162.506085} using model Sobol.
[INFO 04-17 12:10:39] ax.service.ax_client: Completed trial 4 with data: {'strength': (65.371678, None), 'biodegradability': (9.692252, None)}.
[INFO 04-17 12:10:39] ax.service.ax_client: Generated new trial 5 with parameters {'x1': 0.179133, 'x2': 0.005902, 'x3': 0.101222, 'x4': 0.461255, 'x5': 0.171468, 'x6': 0.092874, 'x7': 195.245879} using model Sobol.
[INFO 04-17 12:10:39] ax.service.ax_client: Completed trial 5 with data: {'strength': (33.640468, None), 'biodegradability': (13.557441, None)}.
[INFO 04-17 12:10:39] ax.service.ax_client: Generated new trial 6 with parameters {'x1': 0.088445, 'x2': 0.425975, 'x3': 0.134375, 'x4': 0.143051, 'x5': 0.807816, 'x6': 0.059075, 'x7': 186.147521} using model Sobol.
[INFO 04-17 12:10:39] ax.service.ax_client: Completed trial 6 with data: {'strength': (62.787792, None), 'biodegradability': (8.18435, None)}.
[INFO 04-17 12:10:39] ax.service.ax_client: Generated new trial 7 with parameters {'x1': 0.026292, 'x2': 0.082697, 'x3': 0.27327, 'x4': 0.60958, 'x5': 0.650338, 'x6': 0.02124, 'x7': 155.349769} using model Sobol.
[INFO 04-17 12:10:39] ax.service.ax_client: Completed trial 7 with data: {'strength': (53.265418, None), 'biodegradability': (14.337877, None)}.
[INFO 04-17 12:10:39] ax.service.ax_client: Generated new trial 8 with parameters {'x1': 0.412713, 'x2': 0.228674, 'x3': 0.047212, 'x4': 0.091807, 'x5': 0.307525, 'x6': 0.033846, 'x7': 150.470045} using model Sobol.
[INFO 04-17 12:10:39] ax.service.ax_client: Completed trial 8 with data: {'strength': (101.00477, None), 'biodegradability': (7.90621, None)}.
[INFO 04-17 12:10:39] ax.service.ax_client: Generated new trial 9 with parameters {'x1': 0.117707, 'x2': 0.10513, 'x3': 0.039212, 'x4': 0.253231, 'x5': 0.552579, 'x6': 0.072357, 'x7': 146.750515} using model Sobol.
[INFO 04-17 12:10:39] ax.service.ax_client: Completed trial 9 with data: {'strength': (30.385234, None), 'biodegradability': (14.123691, None)}.
[INFO 04-17 12:10:39] ax.service.ax_client: Generated new trial 10 with parameters {'x1': 0.353, 'x2': 0.132346, 'x3': 0.124854, 'x4': 0.13, 'x5': 0.906883, 'x6': 0.018858, 'x7': 181.506837} using model Sobol.
[INFO 04-17 12:10:39] ax.service.ax_client: Completed trial 10 with data: {'strength': (89.00033, None), 'biodegradability': (7.147958, None)}.
[INFO 04-17 12:10:39] ax.service.ax_client: Generated new trial 11 with parameters {'x1': 0.145964, 'x2': 0.46252, 'x3': 0.193453, 'x4': 0.071685, 'x5': 0.407677, 'x6': 0.083634, 'x7': 157.186764} using model Sobol.
[INFO 04-17 12:10:39] ax.service.ax_client: Completed trial 11 with data: {'strength': (60.091989, None), 'biodegradability': (9.398967, None)}.
[INFO 04-17 12:10:39] ax.service.ax_client: Generated new trial 12 with parameters {'x1': 0.043166, 'x2': 0.023531, 'x3': 0.564187, 'x4': 0.201326, 'x5': 0.449078, 'x6': 0.05358, 'x7': 150.02964} using model Sobol.
[INFO 04-17 12:10:39] ax.service.ax_client: Completed trial 12 with data: {'strength': (24.454797, None), 'biodegradability': (15.363259, None)}.
[INFO 04-17 12:10:39] ax.service.ax_client: Generated new trial 13 with parameters {'x1': 0.080444, 'x2': 0.145896, 'x3': 0.441666, 'x4': 0.103938, 'x5': 0.221277, 'x6': 0.047577, 'x7': 134.169413} using model Sobol.
[INFO 04-17 12:10:39] ax.service.ax_client: Completed trial 13 with data: {'strength': (54.110101, None), 'biodegradability': (13.494714, None)}.
[INFO 04-17 12:10:39] ax.service.ax_client: Generated new trial 14 with parameters {'x1': 0.008678, 'x2': 0.049031, 'x3': 0.155088, 'x4': 0.11705, 'x5': 0.87693, 'x6': 0.043737, 'x7': 176.119587} using model Sobol.
[INFO 04-17 12:10:39] ax.service.ax_client: Completed trial 14 with data: {'strength': (65.61865, None), 'biodegradability': (14.216473, None)}.
[INFO 04-17 12:10:39] ax.service.ax_client: Generated new trial 15 with parameters {'x1': 0.033407, 'x2': 0.200164, 'x3': 0.361986, 'x4': 0.280265, 'x5': 0.334687, 'x6': 0.056508, 'x7': 183.545813} using model Sobol.
[INFO 04-17 12:10:39] ax.service.ax_client: Completed trial 15 with data: {'strength': (40.479783, None), 'biodegradability': (12.629707, None)}.
[INFO 04-17 12:10:39] ax.service.ax_client: Generated new trial 16 with parameters {'x1': 0.30056, 'x2': 0.032063, 'x3': 0.286449, 'x4': 0.149094, 'x5': 0.191202, 'x6': 0.025519, 'x7': 148.145571} using model Sobol.
[INFO 04-17 12:10:39] ax.service.ax_client: Completed trial 16 with data: {'strength': (64.053814, None), 'biodegradability': (9.970079, None)}.
[INFO 04-17 12:10:42] ax.service.ax_client: Generated new trial 17 with parameters {'x1': 0.623365, 'x2': 0.264021, 'x3': 0.0, 'x4': 0.112022, 'x5': 0.346625, 'x6': 0.02824, 'x7': 151.606778} using model BoTorch.
[INFO 04-17 12:10:42] ax.service.ax_client: Completed trial 17 with data: {'strength': (101.039555, None), 'biodegradability': (7.070961, None)}.
[INFO 04-17 12:10:44] ax.service.ax_client: Generated new trial 18 with parameters {'x1': 0.036143, 'x2': 0.083187, 'x3': 0.0, 'x4': 0.123627, 'x5': 0.741901, 'x6': 0.024407, 'x7': 169.50347} using model BoTorch.
[INFO 04-17 12:10:44] ax.service.ax_client: Completed trial 18 with data: {'strength': (79.395803, None), 'biodegradability': (14.41337, None)}.
[INFO 04-17 12:10:47] ax.service.ax_client: Generated new trial 19 with parameters {'x1': 0.185593, 'x2': 0.230212, 'x3': 0.0, 'x4': 0.120535, 'x5': 0.341583, 'x6': 0.027248, 'x7': 150.241196} using model BoTorch.
[INFO 04-17 12:10:47] ax.service.ax_client: Completed trial 19 with data: {'strength': (84.672987, None), 'biodegradability': (9.178258, None)}.
[INFO 04-17 12:10:50] ax.service.ax_client: Generated new trial 20 with parameters {'x1': 0.030186, 'x2': 0.05624, 'x3': 0.0, 'x4': 0.145763, 'x5': 1.0, 'x6': 0.021301, 'x7': 180.761653} using model BoTorch.
[INFO 04-17 12:10:50] ax.service.ax_client: Completed trial 20 with data: {'strength': (70.805624, None), 'biodegradability': (14.205617, None)}.
[INFO 04-17 12:10:53] ax.service.ax_client: Generated new trial 21 with parameters {'x1': 0.483281, 'x2': 0.223379, 'x3': 0.003256, 'x4': 0.093533, 'x5': 0.438249, 'x6': 0.029267, 'x7': 159.783298} using model BoTorch.
[INFO 04-17 12:10:53] ax.service.ax_client: Completed trial 21 with data: {'strength': (114.991376, None), 'biodegradability': (7.265252, None)}.
[INFO 04-17 12:10:56] ax.service.ax_client: Generated new trial 22 with parameters {'x1': 0.171795, 'x2': 0.080973, 'x3': 0.0, 'x4': 0.113554, 'x5': 0.655177, 'x6': 0.027367, 'x7': 168.605868} using model BoTorch.
[INFO 04-17 12:10:56] ax.service.ax_client: Completed trial 22 with data: {'strength': (82.802476, None), 'biodegradability': (10.980162, None)}.
[INFO 04-17 12:10:59] ax.service.ax_client: Generated new trial 23 with parameters {'x1': 0.0, 'x2': 0.094322, 'x3': 0.0, 'x4': 0.0, 'x5': 0.837995, 'x6': 0.019695, 'x7': 151.13371} using model BoTorch.
[INFO 04-17 12:10:59] ax.service.ax_client: Completed trial 23 with data: {'strength': (52.382159, None), 'biodegradability': (15.915414, None)}.
[INFO 04-17 12:11:01] ax.service.ax_client: Generated new trial 24 with parameters {'x1': 0.0, 'x2': 0.15087, 'x3': 0.0, 'x4': 0.198818, 'x5': 0.63805, 'x6': 0.026114, 'x7': 181.20332} using model BoTorch.
[INFO 04-17 12:11:01] ax.service.ax_client: Completed trial 24 with data: {'strength': (65.184941, None), 'biodegradability': (12.632999, None)}.
[INFO 04-17 12:11:04] ax.service.ax_client: Generated new trial 25 with parameters {'x1': 0.036207, 'x2': 0.0, 'x3': 0.0, 'x4': 0.250386, 'x5': 0.908954, 'x6': 0.029536, 'x7': 163.175483} using model BoTorch.
[INFO 04-17 12:11:04] ax.service.ax_client: Completed trial 25 with data: {'strength': (52.719962, None), 'biodegradability': (15.157616, None)}.
[INFO 04-17 12:11:07] ax.service.ax_client: Generated new trial 26 with parameters {'x1': 0.076866, 'x2': 0.173221, 'x3': 0.0, 'x4': 0.11133, 'x5': 0.797933, 'x6': 0.013231, 'x7': 160.373806} using model BoTorch.
[INFO 04-17 12:11:07] ax.service.ax_client: Completed trial 26 with data: {'strength': (79.935322, None), 'biodegradability': (11.222913, None)}.
[INFO 04-17 12:11:11] ax.service.ax_client: Generated new trial 27 with parameters {'x1': 0.479459, 'x2': 0.346647, 'x3': 0.0, 'x4': 0.097935, 'x5': 0.484028, 'x6': 0.026186, 'x7': 167.117139} using model BoTorch.
[INFO 04-17 12:11:11] ax.service.ax_client: Completed trial 27 with data: {'strength': (121.019677, None), 'biodegradability': (5.346441, None)}.
[INFO 04-17 12:11:14] ax.service.ax_client: Generated new trial 28 with parameters {'x1': 0.580115, 'x2': 0.041994, 'x3': 0.0, 'x4': 0.096781, 'x5': 0.463182, 'x6': 0.025367, 'x7': 169.884797} using model BoTorch.
[INFO 04-17 12:11:14] ax.service.ax_client: Completed trial 28 with data: {'strength': (103.120131, None), 'biodegradability': (7.851948, None)}.
[INFO 04-17 12:11:17] ax.service.ax_client: Generated new trial 29 with parameters {'x1': 0.0, 'x2': 0.008422, 'x3': 0.0, 'x4': 0.114974, 'x5': 0.570566, 'x6': 0.01, 'x7': 172.607485} using model BoTorch.
[INFO 04-17 12:11:17] ax.service.ax_client: Completed trial 29 with data: {'strength': (68.035485, None), 'biodegradability': (15.315641, None)}.
[INFO 04-17 12:11:20] ax.service.ax_client: Generated new trial 30 with parameters {'x1': 0.0, 'x2': 0.049395, 'x3': 0.0, 'x4': 0.127239, 'x5': 0.817694, 'x6': 0.025151, 'x7': 152.918907} using model BoTorch.
[INFO 04-17 12:11:20] ax.service.ax_client: Completed trial 30 with data: {'strength': (72.020162, None), 'biodegradability': (16.439821, None)}.
[INFO 04-17 12:11:24] ax.service.ax_client: Generated new trial 31 with parameters {'x1': 0.257347, 'x2': 0.645363, 'x3': 0.0, 'x4': 0.09729, 'x5': 0.480215, 'x6': 0.022957, 'x7': 166.427894} using model BoTorch.
[INFO 04-17 12:11:24] ax.service.ax_client: Completed trial 31 with data: {'strength': (91.473496, None), 'biodegradability': (4.18437, None)}.
[INFO 04-17 12:11:27] ax.service.ax_client: Generated new trial 32 with parameters {'x1': 0.413376, 'x2': 0.072098, 'x3': 0.0, 'x4': 0.099215, 'x5': 0.457643, 'x6': 0.031045, 'x7': 154.361334} using model BoTorch.
[INFO 04-17 12:11:27] ax.service.ax_client: Completed trial 32 with data: {'strength': (97.744146, None), 'biodegradability': (8.924053, None)}.
[INFO 04-17 12:11:31] ax.service.ax_client: Generated new trial 33 with parameters {'x1': 0.50416, 'x2': 0.273746, 'x3': 0.0, 'x4': 0.094571, 'x5': 0.503533, 'x6': 0.022394, 'x7': 167.617223} using model BoTorch.
[INFO 04-17 12:11:31] ax.service.ax_client: Completed trial 33 with data: {'strength': (119.509999, None), 'biodegradability': (5.995635, None)}.
[INFO 04-17 12:11:36] ax.service.ax_client: Generated new trial 34 with parameters {'x1': 0.0, 'x2': 0.058864, 'x3': 0.0, 'x4': 0.121596, 'x5': 0.762225, 'x6': 0.02506, 'x7': 165.611183} using model BoTorch.
[INFO 04-17 12:11:36] ax.service.ax_client: Completed trial 34 with data: {'strength': (77.437982, None), 'biodegradability': (15.832145, None)}.
[INFO 04-17 12:11:40] ax.service.ax_client: Generated new trial 35 with parameters {'x1': 0.005073, 'x2': 0.182147, 'x3': 0.0, 'x4': 0.112601, 'x5': 0.727606, 'x6': 0.02698, 'x7': 167.409672} using model BoTorch.
[INFO 04-17 12:11:40] ax.service.ax_client: Completed trial 35 with data: {'strength': (86.774828, None), 'biodegradability': (13.339966, None)}.
[INFO 04-17 12:11:44] ax.service.ax_client: Generated new trial 36 with parameters {'x1': 0.472891, 'x2': 0.132407, 'x3': 0.0, 'x4': 0.095867, 'x5': 0.440817, 'x6': 0.032109, 'x7': 160.997823} using model BoTorch.
[INFO 04-17 12:11:44] ax.service.ax_client: Completed trial 36 with data: {'strength': (109.173001, None), 'biodegradability': (8.197528, None)}.
[INFO 04-17 12:11:48] ax.service.ax_client: Generated new trial 37 with parameters {'x1': 0.005355, 'x2': 0.259832, 'x3': 0.0, 'x4': 0.107173, 'x5': 0.701674, 'x6': 0.028271, 'x7': 168.553791} using model BoTorch.
[INFO 04-17 12:11:48] ax.service.ax_client: Completed trial 37 with data: {'strength': (91.075536, None), 'biodegradability': (11.375168, None)}.
/Users/andrewf/miniconda3/envs/ax_env/lib/python3.9/site-packages/ax/modelbridge/modelbridge_utils.py:878: UserWarning: FYI: The default behavior of `get_pareto_frontier_and_configs` when `transform_outcomes_and_configs` is not specified has changed. Previously, the default was `transform_outcomes_and_configs=True`; now this argument is deprecated and behavior is as if `transform_outcomes_and_configs=False`. You did not specify `transform_outcomes_and_configs`, so this warning requires no action.
  frontier_observations, f, obj_w, obj_t = get_pareto_frontier_and_configs(

Show the Pareto Optimal Parameters

After the optimization loop has completed, we can view the set of parameter combinations that are found to be Pareto optimal. This will help us understand the tradeoff between the two objectives of interest.

[13]:
p_op = ax_client.get_pareto_optimal_parameters()

# parse p_op values to get parameters and values
p_op_index = list(p_op.keys())
p_op_params = [p_op[i][0] for i in p_op_index]
p_op_values = [p_op[i][1][0] for i in p_op_index]

# organize the results into a dataframe
pareto_results = pd.DataFrame(p_op_params, columns=["x1", "x2", "x3", "x4", "x5", "x6", "x7"])
pareto_results["strength"] = [v["strength"] for v in p_op_values]
pareto_results["biodegradability"] = [v["biodegradability"] for v in p_op_values]
pareto_results.index = p_op_index
display(pareto_results.round(2))
[INFO 04-17 12:11:49] ax.modelbridge.torch: The observations are identical to the last set of observations used to fit the model. Skipping model fitting.
/Users/andrewf/miniconda3/envs/ax_env/lib/python3.9/site-packages/ax/modelbridge/modelbridge_utils.py:878: UserWarning: FYI: The default behavior of `get_pareto_frontier_and_configs` when `transform_outcomes_and_configs` is not specified has changed. Previously, the default was `transform_outcomes_and_configs=True`; now this argument is deprecated and behavior is as if `transform_outcomes_and_configs=False`. You did not specify `transform_outcomes_and_configs`, so this warning requires no action.
  frontier_observations, f, obj_w, obj_t = get_pareto_frontier_and_configs(
      x1    x2   x3    x4    x5    x6      x7  strength  biodegradability
21  0.48  0.22  0.0  0.09  0.44  0.03  159.78    114.99              7.26
36  0.47  0.13  0.0  0.10  0.44  0.03  161.00    109.16              8.20
33  0.50  0.27  0.0  0.09  0.50  0.02  167.62    119.51              6.00
27  0.48  0.35  0.0  0.10  0.48  0.03  167.12    121.00              5.35
32  0.41  0.07  0.0  0.10  0.46  0.03  154.36     97.75              8.92
37  0.01  0.26  0.0  0.11  0.70  0.03  168.55     91.05             11.38
35  0.01  0.18  0.0  0.11  0.73  0.03  167.41     86.79             13.33
18  0.04  0.08  0.0  0.12  0.74  0.02  169.50     79.40             14.41
34  0.00  0.06  0.0  0.12  0.76  0.03  165.61     77.41             15.84
30  0.00  0.05  0.0  0.13  0.82  0.03  152.92     72.03             16.44
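Since the customer requires a strength of at least 70 MPa, this table can be used to shortlist candidates directly. In this run every Pareto-optimal candidate already clears the threshold, but the same pattern is useful when some do not; a small illustrative snippet (not part of the generated script) that filters the Pareto set and sorts by biodegradability:

# Keep Pareto-optimal candidates that meet the 70 MPa strength requirement,
# then sort by biodegradability to surface the best remaining tradeoffs.
feasible = pareto_results[pareto_results["strength"] >= 70.0]
display(feasible.sort_values("biodegradability", ascending=False).round(2))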

Plot the Optimal Values Found During Optimization

We can visualize the set of Pareto optimal solutions relative to the full set of trials by plotting them.

We observe that our historical data was indeed of poor quality, but that our model was able to find many candidates with significantly higher strength and biodegradability scores. Additionally, we can now see a clear tradeoff between the two properties.

[14]:
import matplotlib.pyplot as plt
plt.style.use("ggplot")

fig, ax = plt.subplots(figsize=(4, 4), dpi=100)

all_trials = ax_client.get_trials_data_frame()
ax.scatter(
    all_trials["strength"],
    all_trials["biodegradability"],
    color='#818180',
    facecolor='none',
    s=25,
    label='All Trials'
)
ax.scatter(pareto_results["strength"], pareto_results["biodegradability"], color='#0041FF', label='Pareto Optimal')
historical = np.array([[d['strength'], d['biodegradability']] for d in y_train])
ax.scatter(historical[:,0], historical[:,1], color='#FF9A00', label='Historical data')
ax.axvline(70, ls=':', color='k')
ax.set_xlabel("Strength")
ax.set_ylabel("Biodegradability")
ax.legend(facecolor='w', fontsize=8, loc='center left', bbox_to_anchor=(1, 0.1))
plt.show()
[WARNING 04-17 12:11:49] ax.service.utils.report_utils: Column reason missing for all trials. Not appending column.
[Figure: strength vs. biodegradability for all trials, Pareto optimal points, and historical data, with the 70 MPa strength requirement marked as a dotted line]