NEC Laboratories Europe

Multi-task Learning for Massive Numbers of Regression Tasks

How to make machines learn a massive number of regression tasks efficiently and accurately

Xiao He

Francesco Alesiani

Ammar Shaker

Abstract

Many real-world large-scale regression problems can be formulated as Multi-task Learning (MTL) problems with a massive number of tasks, as in the retail and transportation domains. However, existing MTL methods still fail to offer both the generalization performance and the scalability that such problems require. Scaling up MTL methods to problems with a tremendous number of tasks is a significant challenge. Here, we propose a novel algorithm, named Convex Clustering Multi-Task regression Learning (CCMTL), which integrates convex clustering on the k-nearest neighbor graph of the prediction models. Further, CCMTL efficiently solves the underlying convex problem with a newly proposed optimization method. CCMTL is accurate, efficient to train, and empirically scales linearly in the number of tasks. On both synthetic and real-world datasets, the proposed CCMTL outperforms seven state-of-the-art (SoA) multi-task learning methods in terms of prediction accuracy as well as computational efficiency. On a real-world retail dataset with 23,812 tasks, CCMTL requires only around 30 seconds to train on a single thread, while the SoA methods need up to hours or even days.

Figure 1: Key intuitions behind CCMTL for achieving scalability to massive numbers of tasks and high accuracy

Introduction

Multi-task learning (MTL) is a branch of machine learning that aims at exploiting the correlation among tasks. To achieve this, the learning of different tasks is performed jointly. It has been shown that learning task relationships can transfer knowledge from information-rich tasks to information-poor ones [11], so that the overall generalization error is reduced. With this characteristic, MTL has been successfully applied in use cases ranging from transportation [3] to biomedicine [9].

In real-world regression applications, the number of tasks can be tremendous. For instance, in the retail market, shop owners would like to forecast the sales of all their products based on historical sales information and external factors. A typical shop carries thousands of products, each of which can be modeled as a separate regression task. If we further consider large retail chains, where the number of shops in a given region is in the order of hundreds, the total number of learning tasks easily grows to hundreds of thousands. A similar scenario arises in other applications, e.g., demand prediction in transportation, where each task is the demand at one public transport station in a city for a given line at a certain time. Here, at least tens of thousands of tasks are expected.

In all these scenarios, MTL approaches that exploit relationships among tasks are appropriate. Unfortunately, most existing SoA multi-task learning methods cannot be applied, because they either cannot cope with task heterogeneity or are too computationally expensive, scaling super-linearly in the number of tasks.

To tackle these issues, this paper introduces a novel algorithm, named Convex Clustering Multi-Task regression Learning (CCMTL). It integrates the objective of convex clustering [5] into the multi-task learning framework. CCMTL is efficient, with a runtime linear in the number of tasks, yet provides accurate predictions. The detailed contributions of this paper are fourfold:

  1. Model: A new model for multi-task regression that integrates convex clustering on the k-nearest neighbor graph of the prediction models;

  2. Optimization Method: A new optimization method for the proposed problem that provably converges to the global optimum;

  3. Accurate Prediction: Accurate predictions on both synthetic and real-world datasets from the retail and transportation domains, outperforming eight SoA multi-task learning methods;

  4. Efficient and Scalable Implementation: An efficient and linearly scalable implementation of the algorithm. On a real-world retail dataset with 23,812 tasks, the algorithm requires only around 30 seconds to terminate, whereas SoA methods typically need up to hours or even days.

For proofs and additional details, we refer to the original paper [4] (https://www.aaai.org/ojs/index.php/AAAI/article/view/4262/4140).

The Proposed Method

Let us consider a dataset with $T$ regression tasks $\{X_t, Y_t\} = \{(x_t^i, y_t^i) : i \in \{1, \dots, N_t\}\}$ for each task $t \in \{1, \dots, T\}$. $X_t \in \mathbb{R}^{P \times N_t}$ consists of $N_t$ samples with $P$ features, while $Y_t \in \mathbb{R}^{1 \times N_t}$ is the target for task $t$. There are $N$ samples in total, where $N = \sum_{t=1}^{T} N_t$. We consider linear models in this paper. Let $W = (W_1^T, \dots, W_T^T) \in \mathbb{R}^{T \times P}$, where $W_t \in \mathbb{R}^{P \times 1}$ represents the weight vector for task $t$.
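
To make the notation concrete, here is a minimal Python sketch (with hypothetical shapes and synthetic data; all names are illustrative, not from the paper) of how such a multi-task dataset can be laid out:

```python
import numpy as np

# Hypothetical toy data matching the notation: T tasks over P features,
# task t has N_t samples; X_t is (P x N_t), y_t is (1 x N_t) flattened to (N_t,).
rng = np.random.default_rng(0)
T, P = 5, 3
N = [int(rng.integers(10, 20)) for _ in range(T)]      # N_t per task
X = [rng.standard_normal((P, n)) for n in N]           # X_t in R^{P x N_t}
W_true = rng.standard_normal((T, P))                   # one weight vector per task
y = [W_true[t] @ X[t] + 0.1 * rng.standard_normal(N[t]) for t in range(T)]
```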

Model

Based on the intuition of convex clustering, CCMTL solves the following problem:

$$\min_W \; \frac{1}{2}\sum_{t=1}^{T} \|W_t^T X_t - y_t\|_2^2 \;+\; \frac{\lambda}{2}\sum_{(i,j) \in G} \|W_i - W_j\|_2 \qquad (1)$$

where $\lambda$ is a regularization parameter and $G$ is the $k$-nearest neighbor graph built on the prediction models learned independently for each task.
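
As an illustration, the objective in Eq. (1) can be evaluated directly. The following minimal sketch (our own code, not the authors' implementation) assumes the toy data above and an edge list for $G$:

```python
import numpy as np

def ccmtl_objective(W, X, y, edges, lam):
    """Evaluate Eq. (1): per-task squared losses plus the (unsquared) l2 fusion penalty.

    W     : (T, P) array, one weight vector per task
    X, y  : per-task data; X[t] is (P, N_t), y[t] is (N_t,)
    edges : iterable of (i, j) index pairs from the k-NN graph G
    lam   : regularization parameter lambda
    """
    loss = 0.5 * sum(np.sum((W[t] @ X[t] - y[t]) ** 2) for t in range(len(X)))
    penalty = 0.5 * lam * sum(np.linalg.norm(W[i] - W[j]) for i, j in edges)
    return loss + penalty
```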

Optimization

Fig. 1 graphically shows the key intuitions that allow CCMTL to solve problems with a massive number of tasks with high accuracy.

First, we introduce an auxiliary variable $l_{i,j} \geq 0$ for each edge between nodes $i$ and $j$ in graph $G$, collected in $L = \{l_{i,j}\}$, and form the following new problem:

$$\min_{W,L} \; \frac{1}{2}\sum_{t=1}^{T} \|W_t^T X_t - y_t\|_2^2 \;+\; \frac{\lambda}{2}\sum_{(i,j) \in G} \left( l_{i,j}\,\|W_i - W_j\|_2^2 + \frac{1}{4}\,l_{i,j}^{-1} \right) \qquad (2)$$
Theorem 1.

The optimal solutions $W^*$ of Problem (1) and Problem (2) coincide if

$$l_{i,j} = \frac{1}{2\,\|W_i - W_j\|_2} \qquad (3)$$

Intuitively, Problem (2) learns the weights $l_{i,j}$ for the squared $\ell_2$-norm regularizer $\|W_i - W_j\|_2^2$, as in Graph-regularized Multi-task Learning (SRMTL) [12]. With noisy edges, the squared $\ell_2$ norm forces uncorrelated tasks to be close, while the $\ell_2$ norm is more robust, which is confirmed for MTL in the experimental section.
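
To see why Eq. (3) links the two problems, minimize the inner term of Eq. (2) over $l_{i,j}$ in closed form. Writing $d = \|W_i - W_j\|_2$,

$$\frac{\partial}{\partial l}\left(l\,d^2 + \frac{1}{4l}\right) = d^2 - \frac{1}{4l^2} = 0 \;\;\Longrightarrow\;\; l^* = \frac{1}{2d},$$

and substituting $l^*$ back yields $l^* d^2 + \frac{1}{4l^*} = \frac{d}{2} + \frac{d}{2} = d = \|W_i - W_j\|_2$, which recovers exactly the unsquared penalty of Problem (1).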

In order to solve Problem (2), let us define $X \in \mathbb{R}^{TP \times N}$ as the block-diagonal matrix $X = \mathrm{diag}(X_1, \dots, X_T)$, define $Y \in \mathbb{R}^{1 \times N}$ as the row vector $Y = [y_1, \dots, y_T]$, and define $V \in \mathbb{R}^{TP \times 1}$ as the column vector $V = [W_1^T, \dots, W_T^T]^T$. Let $e_i \in \mathbb{R}^T$ be the indicator vector with the $i$th element set to 1 and all others to 0, and let $I_P$ be the identity matrix of size $P$. Setting the derivative of Problem (2) with respect to $V$ to zero, the optimal $V$ is obtained by solving the following linear system:

$$(A + B)\,V = C \qquad (4)$$

where $A = \big(\lambda \sum_{(i,j) \in G} l_{i,j}\,(e_i - e_j)(e_i - e_j)^T\big) \otimes I_P$, $B = XX^T$, $C = XY^T$, and $W$ is recovered by reshaping $V$.
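
Under the toy setup above, $A$, $B$, and $C$ can be assembled with scipy's sparse tools. This is a minimal sketch with our own helper names, not the paper's implementation:

```python
import numpy as np
import scipy.sparse as sp

def assemble_system(W, X, y, edges, l_weights, lam):
    """Build A, B, C of Eq. (4); l_weights lists the edge weights l_ij, parallel to edges."""
    T, P = W.shape
    # A = (lambda * sum_ij l_ij (e_i - e_j)(e_i - e_j)^T) kron I_P:
    # a weighted graph Laplacian, expanded to P coordinates per task.
    Lap = sp.lil_matrix((T, T))
    for (i, j), lij in zip(edges, l_weights):
        Lap[i, i] += lij; Lap[j, j] += lij
        Lap[i, j] -= lij; Lap[j, i] -= lij
    A = lam * sp.kron(Lap.tocsr(), sp.identity(P), format="csr")
    # B = X X^T is block diagonal; C = X Y^T stacks the per-task terms.
    B = sp.block_diag([Xt @ Xt.T for Xt in X], format="csr")
    C = np.concatenate([Xt @ yt for Xt, yt in zip(X, y)])
    return A, B, C
```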

Does Algorithm 1 converge to the optimal solution of the original problem? The following theorem answers this question.

Theorem 2.

Alternately updating $L$ and $W$ via Eq. (3) and Eq. (4) converges to a global optimum of Problem (1).

Efficient and Scalable Implementation

Input: $\{X_t, Y_t\}$ for $t = 1, \dots, T$; regularization parameter $\lambda$
Output: $W = \{W_1, \dots, W_T\}$

1. for $t \leftarrow 1$ to $T$ do
2.     Solve $W_t$ by Linear Regression on $\{X_t, Y_t\}$
3. end for
4. Construct the $k$-nearest neighbor graph $G$ on $W$
5. while not converged do
6.     Update $L$ using Eq. (3)
7.     Update $W$ by solving Eq. (4) using CMG [6]
8. end while
9. return $W$

Algorithm 1: CCMTL

CCMTL is summarized in Algorithm 1. First, it initializes the weight vectors $W_t$ by performing Linear Regression on each task $t$ separately. Then, it constructs the $k$-nearest neighbor graph $G$ on $W$ based on the Euclidean distance. Finally, the optimization problem is solved by iteratively updating $L$ and $W$ until convergence.
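
Putting the pieces together, a compact sketch of Algorithm 1 could look like the following. This is our own illustration, not the released implementation: it reuses assemble_system() from the earlier snippet, substitutes a generic sparse direct solver for CMG, and uses a fixed iteration count in place of a convergence test.

```python
import numpy as np
from scipy.sparse.linalg import spsolve   # stand-in for the CMG solver of [6]
from sklearn.neighbors import NearestNeighbors

def ccmtl(X, y, lam, k=5, iters=20, eps=1e-8):
    """Sketch of Algorithm 1; relies on assemble_system() from the sketch above."""
    T, P = len(X), X[0].shape[0]
    # Lines 1-3: initialize each W_t by per-task least squares.
    W = np.stack([np.linalg.lstsq(X[t].T, y[t], rcond=None)[0] for t in range(T)])
    # Line 4: k-nearest neighbor graph G on the initial models (skip self-neighbor).
    nn = NearestNeighbors(n_neighbors=min(k + 1, T)).fit(W)
    _, idx = nn.kneighbors(W)
    edges = sorted({(min(i, int(j)), max(i, int(j)))
                    for i in range(T) for j in idx[i, 1:]})
    # Lines 5-9: alternate Eq. (3) and Eq. (4); eps guards against W_i == W_j.
    for _ in range(iters):
        l_weights = [1.0 / (2.0 * max(np.linalg.norm(W[i] - W[j]), eps))
                     for i, j in edges]
        A, B, C = assemble_system(W, X, y, edges, l_weights, lam)
        W = spsolve((A + B).tocsc(), C).reshape(T, P)
    return W
```

On the toy data above, a call such as `W_hat = ccmtl(X, y, lam=1.0)` returns the $T \times P$ matrix of fused task models.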

Solving the linear system in Eq. (4) involves inverting a matrix of size $PT \times PT$, where $P$ is the number of features and $T$ is the number of tasks. Direct inversion incurs cubic computational complexity, which does not scale to a large number of tasks. However, certain properties of $A$ and $B$ in Eq. (4) can be exploited to derive an efficient implementation.

Theorem 3.

$A$ and $B$ in Eq. (4) are both symmetric and positive semi-definite, and $A$ is a Laplacian matrix.

Based on Theorem 3, Eq. (4) can be solved efficiently with a preconditioned iterative solver such as CMG [6].
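
Since Theorem 3 guarantees that $A + B$ is symmetric and positive semi-definite, conjugate-gradient-type methods apply directly. The minimal sketch below uses scipy's unpreconditioned cg as a stand-in; the paper instead pairs the solve with the Combinatorial Multigrid (CMG) preconditioner [6]:

```python
from scipy.sparse.linalg import cg

def solve_eq4(A, B, C):
    """Solve (A + B) V = C iteratively; A + B is symmetric PSD by Theorem 3.

    Plain conjugate gradient is used here only to keep the sketch portable;
    the paper uses a CMG-preconditioned solver for its linear runtime.
    """
    V, info = cg(A + B, C)
    if info != 0:
        raise RuntimeError(f"CG did not converge (info={info})")
    return V
```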

The runtime of CCMTL consists of three parts: 1) initialization of $W$, 2) $k$-nearest neighbor graph construction, and 3) optimization of Problem (2). Initializing $W$ by linear regression is efficient and scales linearly in the number of tasks $T$. Naive $k$-nearest neighbor graph construction scales quadratically in $T$, but approximate solvers have $O(T \log T)$ complexity. Solving the linear system in Eq. (4) exhibits linear complexity in $T$.
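
As an illustration of the graph-construction cost, tree-based neighbor search brings the naive $O(T^2)$ step down to roughly $O(T \log T)$ for low-dimensional models. A sketch with scikit-learn (our choice of library, on made-up data):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# 160k hypothetical task models with P = 5 features each.
W = np.random.default_rng(0).standard_normal((160_000, 5))

# A k-d tree gives ~O(T log T) construction and queries for low-dimensional W,
# versus O(T^2) for brute-force pairwise distances.
nn = NearestNeighbors(n_neighbors=6, algorithm="kd_tree").fit(W)
dist, idx = nn.kneighbors(W)   # row t: task t itself plus its 5 nearest models
```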

Figure 2: Scalability experiments on synthetic datasets with an increasing number of tasks (task number on a log scale)

Experiments

Comparison Methods and Datasets

We compare our method CCMTL with several SoA methods. As baselines, we use Single-task learning (STL), which learns a single model by pooling together the data from all tasks, and Independent task learning (ITL), which learns each task independently. We further compare to the multi-task feature learning method Joint Feature Learning (L21) [1] and to the low-rank methods Trace-norm Regularized Learning (Trace) [8] and Robust Multi-task Learning (RMTL) [2]. We also compare to five clustering approaches: CMTL [7], FuseMTL [13], SRMTL [12], and the two recently proposed models BiFactorMTL and TriFactorMTL [10].

Table 1: Summary statistics of the datasets

Name       Samples     Features   Num Tasks
Sales      34,062      5          811
Ta-Feng    2,619,320   5          23,812
Alighting  33,945      5          1,926
Boarding   33,945      5          1,926

We employ both synthetic and real-world datasets. Table 1 shows their statistics.

Results and Discussion

Table 2: Results (RMSE and runtime) on retail datasets.

              Sales                     Ta-Feng
           RMSE           Time (s)   RMSE           Time (s)
STL        2.861 (0.02)   0.1        0.791 (0.01)   0.2
ITL        3.115 (0.02)   0.1        0.818 (0.01)   0.4
L21        3.301 (0.01)   11.8       0.863 (0.01)   831.2
Trace      3.285 (0.21)   10.4       0.863 (0.01)   582.3
RMTL       3.111 (0.01)   3.4        0.833 (0.01)   181.5
CMTL       3.088 (0.01)   43.4       -              >24h
FuseMTL    2.898 (0.01)   4.3        0.764 (0.01)   8483.3
SRMTL      2.854 (0.02)   10.3       -              >24h
BiFactor   2.882 (0.01)   55.7       -              >24h
TriFactor  2.857 (0.04)   499.1      -              >24h
CCMTL      2.793 (0.01)   1.8        0.767 (0.01)   35.3

Table 2 depicts the results on the two retail datasets, Sales and Ta-Feng; it also reports the time (in seconds) required for training with the best-found parametrization of each method. Here, 50% of the samples are used for training. The best runtime among the MTL methods is shown in boldface. Tasks in these two datasets are rather homogeneous; therefore, the baseline STL is competitive and outperforms many MTL methods. STL outperforms ITL, L21, Trace (the hyperparameter search range for L21 and Trace was shifted to $[10^0, 10^{10}]$ on Ta-Feng to obtain reasonable results), RMTL, CMTL, FuseMTL, SRMTL, BiFactor, and TriFactor on the Sales dataset, and outperforms ITL, L21, Trace, and RMTL on Ta-Feng (we set a timeout of 24h; CMTL, SRMTL, BiFactor, and TriFactor did not return a result within this timeout on Ta-Feng). CCMTL is the only method that performs better (also statistically significantly better) than STL on both datasets; it also outperforms all MTL methods (with statistical significance) on both datasets, except for FuseMTL, which performs slightly better than CCMTL only on Ta-Feng. CCMTL requires the smallest runtime among the competing MTL algorithms: on Ta-Feng, it needs only around 30 seconds, while the SoA methods need up to hours or even days.

Table 3: Results (RMSE and runtime) on transportation datasets.

              Alighting                 Boarding
           RMSE           Time (s)   RMSE           Time (s)
STL        3.073 (0.02)   0.1        3.236 (0.03)   0.1
ITL        2.894 (0.02)   0.1        3.002 (0.03)   0.1
L21        2.865 (0.04)   14.6       2.983 (0.03)   16.7
Trace      2.835 (0.01)   19.1       2.997 (0.05)   17.5
RMTL       2.985 (0.03)   6.7        3.156 (0.04)   7.1
CMTL       2.970 (0.02)   82.6       3.105 (0.03)   91.8
FuseMTL    3.080 (0.02)   11.1       3.243 (0.03)   11.3
SRMTL      2.793 (0.02)   12.3       2.926 (0.02)   14.2
BiFactor   3.010 (0.02)   152.1      3.133 (0.03)   99.7
TriFactor  2.913 (0.02)   282.3      3.014 (0.03)   359.1
CCMTL      2.795 (0.02)   4.8        2.928 (0.03)   4.1

Table 3 depicts the results on the transportation datasets for two different target attributes (alighting and boarding); again, the runtime is reported for the best-found parametrization, and the best runtime achieved by the MTL methods is shown in boldface.

Figure 2 shows the recorded runtime in seconds, with the number of tasks on a log scale. As can be seen, CMTL, BiFactor, and TriFactor were not able to process 40k tasks in less than 24 hours and were therefore stopped (the extrapolation of the minimum required runtime is shown as a dashed line). FuseMTL, SRMTL, L21, and Trace show super-linear growth of the required runtime on this scale. Both CCMTL and RMTL grow only mildly with the number of tasks, requiring only around 500 and 800 seconds, respectively, for the dataset with 160k tasks. CCMTL is the fastest method among all the MTL competitors.

Conclusion

In this paper, we study the multi-task learning problem with a massive number of tasks. We integrate convex clustering into the multi-task regression learning problem, capturing task relationships through the k-NN graph of the prediction models. Further, we present CCMTL, an approach that solves this problem efficiently and is guaranteed to converge to the global optimum. Extensive experiments show that CCMTL makes more accurate predictions, runs faster than SoA competitors on both synthetic and real-world datasets, and scales linearly in the number of tasks. CCMTL can serve as a method for a wide range of large-scale regression applications where the number of tasks is tremendous. In the future, we will explore the use of the proposed method for online learning and high-dimensional problems.

Acknowledgments

The authors would like to thank Luca Franceschi for discussions on the paper, the anonymous reviewers for their helpful comments, and Prof. Yiannis Koutis for sharing the implementation of CMG.

References

  • [1] A. Argyriou, T. Evgeniou, and M. Pontil (2007). Multi-task feature learning. In NIPS, pp. 41–48.
  • [2] J. Chen, J. Zhou, and J. Ye (2011). Integrating low-rank and group-sparse structures for robust multi-task learning. In KDD, pp. 42–50.
  • [3] D. Deng, C. Shahabi, U. Demiryurek, and L. Zhu (2017). Situation aware multi-task learning for traffic prediction. In ICDM, pp. 81–90.
  • [4] X. He, F. Alesiani, and A. Shaker (2019). Efficient and scalable multi-task regression on massive number of tasks. In The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19).
  • [5] T. Hocking, A. Joulin, F. Bach, and J. Vert (2011). Clusterpath: an algorithm for clustering using convex fusion penalties. In ICML.
  • [6] I. Koutis, G. Miller, and D. Tolliver (2011). Combinatorial preconditioners and multilevel solvers for problems in computer vision and image processing. Computer Vision and Image Understanding, 115(12), pp. 1638–1646.
  • [7] L. Jacob, J. Vert, and F. R. Bach (2009). Clustered multi-task learning: a convex formulation. In NIPS, pp. 745–752.
  • [8] S. Ji and J. Ye (2009). An accelerated gradient method for trace norm minimization. In ICML, pp. 457–464.
  • [9] L. Li, X. He, and K. Borgwardt (2018). Multi-target drug repositioning by bipartite block-wise sparse multi-task learning. BMC Systems Biology, 12(4), p. 55.
  • [10] K. Murugesan, J. Carbonell, and Y. Yang (2017). Co-clustering for multitask learning. ICML.
  • [11] Y. Zhang and D. Yeung (2014). A regularization approach to learning task relationships in multitask learning. ACM Transactions on Knowledge Discovery from Data (TKDD), 8(3), p. 12.
  • [12] J. Zhou, J. Chen, and J. Ye (2011). MALSAR: multi-task learning via structural regularization. Arizona State University, 21.
  • [13] J. Zhou, J. Liu, V. A. Narayan, and J. Ye (2012). Modeling disease progression via fused sparse group lasso. In KDD, pp. 1095–1103.