A Multitask Approach to Learn Molecular Properties
Journal contribution posted on 22.07.2021, 03:13 by Zheng Tan, Yan Li, Weimei Shi, Shiqing Yang
Efforts to build a robust multitask model that resolves intertask correlations have continued for many years. The multitask deep neural network, the most widely used multitask framework, nevertheless suffers from several issues, such as inconsistent performance improvement over independent single-task baselines. This work introduces an alternative framework based on problem transformation methods. We build our multitask models on the stacking of a base regressor and classifier, where multitarget predictions are obtained from an additional training stage on an expanded molecular feature space. The model architecture is applied to the QM9, Alchemy, and Tox21 datasets using a variety of baseline machine learning techniques. The resulting multitask models improve forecasting precision by 1 to 10%, with task prediction accuracy consistently exceeding that of the independent single-target models. The proposed method is notably effective at capturing intertarget dependence and, moreover, shows great potential for simulating a wide range of molecular properties under the transformation framework.
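The two-stage scheme described above can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' exact implementation: the synthetic descriptor data, the closed-form ridge base regressor, and all dimensions are assumptions chosen only to show how stage-1 predictions expand the feature space for a stage-2 model.

```python
import numpy as np

def ridge_fit(X, y, lam=1e-3):
    """Closed-form ridge regression: w = (X^T X + lam*I)^(-1) X^T y.
    Stands in for an arbitrary base regressor (assumption)."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                      # synthetic molecular descriptors
W_true = rng.normal(size=(8, 3))
Y = X @ W_true + 0.1 * rng.normal(size=(200, 3))   # three correlated targets

# Stage 1: independent single-target base regressors
# (column-wise fit over the original feature space).
W1 = ridge_fit(X, Y)
P1 = X @ W1                                        # stage-1 predictions, one column per task

# Expanded feature space: original descriptors plus the
# stage-1 predictions of all tasks.
X_exp = np.hstack([X, P1])

# Stage 2: retrain each target on the expanded space, so every
# task can exploit the other tasks' predictions (intertask correlation).
W2 = ridge_fit(X_exp, Y)
P2 = X_exp @ W2

mse_stage1 = float(np.mean((P1 - Y) ** 2))
mse_stage2 = float(np.mean((P2 - Y) ** 2))
```

For a classification target, the same transformation applies with a base classifier in stage 1 whose predicted labels (or probabilities) are appended to the feature matrix before the stage-2 fit.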