We study a convex resource allocation problem in which lower and upper bounds are imposed on partial sums of the allocations. This model arises in a large variety of applications, including production planning, lot sizing, ship speed optimization, stratified sampling, support vector machines, portfolio management, and telecommunications.
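To make the problem class concrete, the toy instance below (a minimal sketch with hypothetical data, solved with a generic SciPy solver rather than the algorithm introduced in this work) minimizes a separable convex cost subject to lower and upper bounds on the partial sums of the allocations.

import numpy as np
from scipy.optimize import minimize

n = 5
w = np.array([1.0, 2.0, 0.5, 1.5, 1.0])    # convex costs f_i(x) = w_i * x^2
lo = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # lower bounds on partial sums x_1 + ... + x_k
up = np.array([3.0, 5.0, 7.0, 9.0, 10.0])  # upper bounds on partial sums

def total_cost(x):
    return float(np.sum(w * x ** 2))

# One pair of inequality constraints per partial-sum (nested) bound.
cons = []
for k in range(n):
    cons.append({'type': 'ineq', 'fun': lambda x, k=k: np.sum(x[:k + 1]) - lo[k]})
    cons.append({'type': 'ineq', 'fun': lambda x, k=k: up[k] - np.sum(x[:k + 1])})

res = minimize(total_cost, x0=np.full(n, 2.0), method='SLSQP',
               bounds=[(0.0, None)] * n, constraints=cons)
print(res.x, res.fun)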
We introduce a gradient-free divide-and-conquer algorithm that uses monotonicity arguments to generate valid bounds from the recursive calls and to eliminate linking constraints based on information from the subproblems. These principles are unusual: the algorithm is based neither on greedy steps and scaling nor on flow propagation, as is often the case for this family of problems. It also does not require strict convexity or differentiability, and it improves upon the best known complexity for this problem, producing a solution to the integer version of the problem (or an epsilon-approximate solution to the continuous version) in linearithmic time as a function of the problem size.
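The following sketch is not the proposed algorithm; it only illustrates, on a hypothetical single-constraint subproblem solved with a generic SciPy solver, the kind of monotonicity on which such bounding arguments rely: in a separable convex allocation with one total-resource constraint, the optimal allocation of every activity is componentwise non-decreasing in the available resource.

import numpy as np
from scipy.optimize import minimize

w = np.array([1.0, 2.0, 0.5, 1.5, 1.0])   # convex costs f_i(x) = w_i * x^2
ub = np.array([4.0, 4.0, 4.0, 4.0, 4.0])  # per-activity upper bounds

def solve_for_resource(R):
    """Minimize sum_i w_i x_i^2 subject to sum_i x_i = R and 0 <= x_i <= ub_i."""
    cons = [{'type': 'eq', 'fun': lambda x: np.sum(x) - R}]
    res = minimize(lambda x: float(np.sum(w * x ** 2)),
                   x0=np.full(len(w), R / len(w)), method='SLSQP',
                   bounds=list(zip(np.zeros(len(w)), ub)), constraints=cons)
    return res.x

x_small, x_large = solve_for_resource(5.0), solve_for_resource(8.0)
# The optimum for the smaller resource bounds the larger one from below.
print(np.all(x_small <= x_large + 1e-6))  # expected: True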
Our experimental analyses confirm the practical performance of the method, which produces optimal solutions for problems with up to one million variables in a few seconds. Promising applications to support vector ordinal regression, a problem from machine learning, are also investigated.