Accelerating Reaction Optimization using Multitask Learning


Kobi Felton, Daniel Wigh, Alexei Lapkin

Sustainable Reaction Engineering Group, Department of Chemical Engineering and Biotechnology, University of Cambridge, Cambridge, UK


Introduction

Several of the grand engineering challenges of the 21st century require rapid translation of an initial discovery into an optimized process. Bayesian optimization (BO) is an efficient method for iteratively optimizing the reactions used in fine chemicals manufacturing, often finding good conditions within fifty experiments.

In the BO paradigm, an algorithm suggests an experiment or batch of experiments to be executed in the laboratory. The results of those experiments are then used to suggest further experiments that improve one or more objectives.
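The loop described above can be sketched in a few lines. Below is a minimal, self-contained illustration (not the code used in this work): a NumPy Gaussian-process surrogate with an RBF kernel and an upper-confidence-bound acquisition function optimizes a hypothetical one-dimensional "yield" landscape standing in for a lab experiment.

```python
import numpy as np

def rbf(X1, X2, ls=0.2):
    """Squared-exponential kernel between two sets of points."""
    d = X1[:, None, :] - X2[None, :, :]
    return np.exp(-0.5 * np.sum(d**2, axis=-1) / ls**2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean and standard deviation at candidates Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(rbf(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def objective(x):
    # Hypothetical "reaction yield" surface standing in for a real experiment.
    return np.exp(-((x - 0.7) ** 2) / 0.02).ravel()

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (3, 1))            # initial experiments
y = objective(X)
cand = np.linspace(0, 1, 201)[:, None]   # candidate conditions

for _ in range(10):                      # BO loop: model -> acquire -> run
    mu, sd = gp_posterior(X, y, cand)
    x_next = cand[np.argmax(mu + 2.0 * sd)]   # UCB acquisition
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[None, :]))
```

After ten iterations the best observed "yield" sits near the optimum at x = 0.7; in practice each `objective` call would be a laboratory experiment.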

However, fifty is still a large number of experiments for the primary use case of BO: early-stage process optimization where catalysts, bases and solvents need to be chosen.

In thinking about this challenge, we noticed that, in previous studies, each optimization run was started from scratch. Unlike a chemist, the algorithms did not incorporate information from similar past optimization studies.


Here, we apply a technique called multitask Bayesian optimization to leverage data from past experiments to accelerate new ones. We show cases where multitask BO halves the number of experiments needed to reach the same results, and we discuss cases where it performs only on par with conventional BO.

We foresee our technique being particularly useful in companies sharing data in a pre-competitive manner since it can work on anonymized datasets. Additionally, it can work in cases where only limited historical data is available within one organization.

What is multitask Bayesian optimization?

To understand multitask BO, you need to understand multitask learning. Multitask learning trains one model to predict several related tasks. The idea is that the model transfers knowledge across tasks, giving better performance than if several models were trained separately.

Two single-task models (samples of the model in solid lines) are used to predict the true functions separately (dotted lines).
Using a multitask model on the same functions as the left figure gives significantly better predictions.

Multitask BO uses a multitask model inside a BO framework: a probabilistic model is trained and then sampled by an acquisition function to choose subsequent experiments. In particular, we use multitask Gaussian processes with an intrinsic coregionalization model (ICM) kernel.
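The ICM kernel couples tasks through a coregionalization matrix B: the covariance between observation (x, t) on task t and (x′, t′) on task t′ is B[t, t′] · k(x, x′). The NumPy sketch below (a toy illustration with made-up data, not our actual model) shows the payoff: a "new" task with only two measurements borrows strength from a correlated "old" task with twenty, and beats a single-task GP trained on the two points alone.

```python
import numpy as np

def rbf(X1, X2, ls=0.3):
    """Squared-exponential kernel for 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * d**2 / ls**2)

def icm_kernel(X1, t1, X2, t2, B, ls=0.3):
    """ICM kernel: K((x,t),(x',t')) = B[t,t'] * k(x,x')."""
    return B[np.ix_(t1, t2)] * rbf(X1, X2, ls)

# Two correlated tasks: an "old" reaction with lots of data, a "new" one with little.
rng = np.random.default_rng(1)
f = lambda x: np.sin(3 * x)
X_old = rng.uniform(0, 2, 20); y_old = f(X_old)
X_new = np.array([0.2, 1.8]);  y_new = f(X_new) + 0.1   # new task: shifted copy

X = np.concatenate([X_old, X_new])
t = np.concatenate([np.zeros(20, int), np.ones(2, int)])
y = np.concatenate([y_old, y_new])

# Coregionalization matrix B = W W^T + diag(kappa): tasks strongly correlated.
W = np.array([[1.0], [0.9]]); B = W @ W.T + 0.01 * np.eye(2)

Xs = np.linspace(0, 2, 50)
ts = np.ones(50, int)                       # predict on the new task
K = icm_kernel(X, t, X, t, B) + 1e-4 * np.eye(len(X))
Ks = icm_kernel(X, t, Xs, ts, B)
mu = Ks.T @ np.linalg.solve(K, y)           # multitask posterior mean

# Single-task baseline: GP trained on the two new-task points only.
K1 = rbf(X_new, X_new) + 1e-4 * np.eye(2)
mu_single = rbf(X_new, Xs).T @ np.linalg.solve(K1, y_new)

rmse = lambda m: np.sqrt(np.mean((m - (f(Xs) + 0.1)) ** 2))
```

Here `rmse(mu)` comes out well below `rmse(mu_single)`: the shared structure learned from the old task fills in the regions the two new-task points cannot cover.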

Previously, multitask methods have been used to predict several related properties of a molecule, such as scores on similar biological assays, various fuel ratings, and reactivity in related reactions. However, multitask methods have not been applied directly to the optimization of reaction conditions.

Simulating reactions with benchmarks

We create simulations of chemical reactions, called benchmarks, to avoid the expense and time of laboratory experiments when testing different BO algorithms. We use Summit, an open-source framework for reaction optimization, to create benchmarks based on machine learning models trained on experimental data.
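The idea behind such a benchmark can be sketched as follows. A surrogate model is fit to logged experimental data and then answers queries at unseen conditions as if it were a "virtual lab". This toy version uses a one-nearest-neighbour lookup in place of the trained ML models Summit uses, and the data are hypothetical, not from the paper:

```python
import numpy as np

class ReactionBenchmark:
    """Toy 'virtual lab': fit a simple surrogate to logged experiments,
    then answer yield queries for unseen conditions."""

    def __init__(self, conditions, yields):
        self.X = np.asarray(conditions, float)
        self.y = np.asarray(yields, float)

    def run_experiment(self, x):
        # Predict yield at condition x via the nearest logged experiment.
        d = np.linalg.norm(self.X - np.asarray(x, float), axis=1)
        return self.y[np.argmin(d)]

# Hypothetical logged data: columns = [temperature, time, catalyst conc.]
X_logged = np.array([[30, 1, 0.5], [60, 2, 1.0], [90, 3, 2.0]])
y_logged = np.array([12.0, 55.0, 78.0])

bench = ReactionBenchmark(X_logged, y_logged)
print(bench.run_experiment([85, 3, 1.8]))  # → 78.0
```

An optimization algorithm can then call `run_experiment` thousands of times at no cost, which is what makes large in-silico comparisons of BO strategies feasible.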

The first benchmark is based on data collected by Baumgartner et al. for a Suzuki cross coupling. Suzuki reactions are extremely common in the fine chemicals industry, so they act as a good test bed for our method.

Scheme of the Suzuki cross coupling studied by Baumgartner et al. 145 experiments with this reaction are used for creating a benchmark.

Our benchmark model is trained on 145 reaction conditions and validated on 15 conditions. There are a total of 8 different catalysts to choose from as well as the option to change temperature, reaction time and catalyst concentrations. A parity plot comparing predicted to measured yield is shown below.
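The numbers usually quoted alongside a parity plot are R² and mean absolute error between measured and predicted yields. As a small illustration (with made-up yields, not the benchmark's actual validation data), they can be computed as:

```python
import numpy as np

def parity_metrics(y_true, y_pred):
    """R^2 and MAE for a parity (predicted vs measured) comparison."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    resid = y_true - y_pred
    ss_res = np.sum(resid**2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot, np.mean(np.abs(resid))

# Hypothetical measured vs benchmark-predicted yields (%)
measured = [10, 35, 50, 72, 90]
predicted = [12, 30, 55, 70, 88]
r2, mae = parity_metrics(measured, predicted)
```

Points lying close to the diagonal of the parity plot correspond to a high R² and a low MAE in yield percentage points.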

Parity plot comparing predicted vs measured values of yield for a Suzuki cross coupling benchmark.

We also include benchmarks based on more Suzuki cross couplings and C-N cross couplings.

When does multitask BO shine, and when does it fail?

Below we compare single-task and multitask BO on several different benchmarks. We find that in the first Suzuki cross coupling example presented above, multitask BO achieves twice the yield of single-task BO in the same number of experiments. Equivalently, it reaches the same yield in less than half the experiments.
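The "less than half the experiments" comparison can be made precise by counting how many experiments each strategy needs before its best-so-far yield first reaches a target. A small helper (with hypothetical traces, not the paper's results) for that metric:

```python
import numpy as np

def experiments_to_reach(yields, target):
    """Number of experiments until the running best yield first hits
    the target; returns None if the run never gets there."""
    best = np.maximum.accumulate(np.asarray(yields, float))
    hits = np.nonzero(best >= target)[0]
    return int(hits[0]) + 1 if hits.size else None

# Hypothetical optimization traces (yield per experiment, %)
single_task = [5, 12, 18, 30, 41, 55, 62, 71, 80]
multitask = [35, 52, 68, 81, 85, 86, 88, 89, 90]

n_single = experiments_to_reach(single_task, 80)   # → 9
n_multi = experiments_to_reach(multitask, 80)      # → 4
```

Averaging this count over repeated runs of each strategy gives the acceleration factor reported when comparing single-task and multitask BO.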


However, there are cases where single-task and multitask BO perform similarly. We found this to be particularly true when the reactants are quite different from each other (see Table 1 of this paper for Suzuki R1-4). In such cases, the performance of multitask BO is within error of single-task BO.


Next Steps

We plan to execute more in-silico benchmarking studies to understand the effect of the number of tasks on multitask BO. Additionally, we are preparing to validate this method on a novel experimental case study.