Managing large-scale systems often involves simultaneously solving thousands of unrelated stochastic optimization problems, each with limited data. Intuition suggests that one can decouple these unrelated problems and solve them separately without loss of generality. We propose a novel data-pooling algorithm called Shrunken-SAA that disproves this intuition. In particular, we prove that combining data across problems can outperform decoupling, even when there is no a priori structure linking the problems and data are drawn independently. Our approach does not require strong distributional assumptions and applies to constrained, possibly non-convex, non-smooth optimization problems such as vehicle routing, economic lot-sizing, or facility location. We compare and contrast our results with a similar phenomenon in statistics (Stein’s Phenomenon), highlighting unique features that arise in the optimization setting but are not present in estimation. We further prove that as the number of problems grows large, Shrunken-SAA learns whether pooling can improve upon decoupling and the optimal amount to pool, even if the average amount of data per problem is fixed and bounded. Importantly, we provide a simple intuition based on stability that explains when and why data-pooling offers a benefit, elucidating this perhaps surprising phenomenon. This intuition further suggests that data-pooling offers the most benefits when there are many problems, each of which has a small amount of relevant data. Finally, we demonstrate the practical benefits of data-pooling using real data from a chain of retail drug stores in the context of inventory management.
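The sketch below is a toy illustration of the data-pooling idea, not the paper's exact algorithm (which, in particular, chooses the pooling amount from the data alone): K unrelated newsvendor problems, each with only a few demand observations, are solved by SAA applied to a "shrunken" distribution that mixes each problem's own data with the pooled data. The Poisson demand model, problem sizes, and the grid of shrinkage weights are all hypothetical.

```python
# Toy sketch of data pooling for K unrelated newsvendor problems, each with tiny data sets:
# SAA is applied to a mixture of each problem's own data (weight 1-alpha) and the pooled
# data across all problems (weight alpha). alpha = 0 corresponds to fully decoupled SAA.
import numpy as np

rng = np.random.default_rng(0)
K, n, cu, co = 500, 5, 4.0, 1.0             # problems, observations per problem, under-/over-age costs
crit = cu / (cu + co)                       # critical fractile of the newsvendor problem

true_means = rng.gamma(shape=5.0, scale=4.0, size=K)      # heterogeneous (unknown) demand rates
data = [rng.poisson(mu, size=n) for mu in true_means]     # a handful of observations per problem
pooled = np.concatenate(data)

def weighted_quantile(values, weights, q):
    order = np.argsort(values)
    cum = np.cumsum(weights[order]) / weights.sum()
    return values[order][np.searchsorted(cum, q)]

def shrunken_saa_order(d_k, alpha):
    """Critical-fractile quantile of the mixture (1-alpha)*own data + alpha*pooled data."""
    vals = np.concatenate([d_k, pooled])
    wts = np.concatenate([np.full(len(d_k), (1 - alpha) / len(d_k)),
                          np.full(len(pooled), alpha / len(pooled))])
    return weighted_quantile(vals, wts, crit)

def avg_true_cost(alpha, n_eval=2000):
    """Average out-of-sample newsvendor cost across all K problems for a given shrinkage."""
    total = 0.0
    for mu, d_k in zip(true_means, data):
        q = shrunken_saa_order(d_k, alpha)
        d_new = rng.poisson(mu, size=n_eval)
        total += np.mean(cu * np.maximum(d_new - q, 0) + co * np.maximum(q - d_new, 0))
    return total / K

for alpha in [0.0, 0.25, 0.5, 0.75]:        # alpha = 0 decouples; alpha > 0 pools data
    print(f"alpha={alpha:4.2f}  average cost {avg_true_cost(alpha):.3f}")
```

Whether pooling helps in this toy, and by how much, depends on how heterogeneous the problems are and how little data each has; adapting the pooling amount to that trade-off is precisely what Shrunken-SAA does.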
Increased availability of high-quality customer information has fueled interest in personalized pricing strategies, i.e., strategies that predict an individual customer’s valuation for a product and then offer a customized price tailored to that customer. While the appeal of personalized pricing is clear, it may also incur large costs in the form of market research, investment in information technology and analytics expertise, and branding risks. In light of these tradeoffs, in this work, we study the value of personalized pricing over simpler pricing strategies, such as charging a single price to all customers. In the first part of the work, we provide tight, closed-form upper bounds on the ratio of personalized pricing profits to single-pricing profits that depend on simple statistics of the valuation distribution. These bounds shed light on the types of markets for which personalized pricing has the most potential. In the second part of the work, we use these bounds to study the two key assumptions underlying personalized pricing: (i) the firm can charge a distinct price to each customer and (ii) the firm can perfectly predict customer valuations. Specifically, we bound the ratio of personalized pricing profits to the profits from a k-segmentation strategy, where the firm is omniscient but can only charge customers one of k prices, and the ratio of personalized pricing profits to the profits of a feature-based pricing strategy, where the firm can charge a continuum of prices but is no longer omniscient. These bounds help quantify the value of the operational capability of charging distinct prices and the value of additional predictive accuracy, respectively. Finally, we provide a general framework for computing an essentially tight bound on the ratio of personalized pricing profits to single-pricing profits in terms of the mean, support, and a generalized moment of the distribution.
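As a concrete (and entirely hypothetical) illustration of the ratio these bounds control, the snippet below compares the profit of an omniscient personalized-pricing seller, who charges every customer exactly their valuation, against the best single posted price, assuming zero marginal cost and a lognormal market of valuations.

```python
# Illustrative computation (not from the paper) of the profit ratio the bounds target:
# an omniscient personalized-pricing seller earns E[V] per customer, while a single-price
# seller earns max_p p * P(V >= p). Marginal costs are assumed to be zero.
import numpy as np

rng = np.random.default_rng(1)
valuations = rng.lognormal(mean=3.0, sigma=0.8, size=100_000)    # hypothetical market

personalized_profit = valuations.mean()                          # charge each customer their valuation

# Best single price: evaluate p * P(V >= p) with the observed valuations as candidate prices.
candidates = np.sort(valuations)
revenue = candidates * (1.0 - np.arange(len(candidates)) / len(candidates))
single_price_profit = revenue.max()

print(f"personalized : {personalized_profit:.2f}")
print(f"single price : {single_price_profit:.2f}")
print(f"ratio        : {personalized_profit / single_price_profit:.2f}")
```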
Optimization applications often depend upon a huge number of uncertain parameters. In many contexts, however, the amount of relevant data per parameter is small, and hence, we may only have imprecise estimates. We term this setting – where the number of uncertainties is large, but all estimates have low precision – the “small-data, large-scale regime.” We formalize a model for this new regime, focusing on optimization problems with uncertain linear objectives. We show that common data-driven methods may perform poorly in this new setting, despite their provably good performance in the traditional large-sample regime. Such methods include sample average approximation, “estimate-then-optimize” policies, data-driven robust optimization, and certain regularized policies.
We then propose a novel framework for selecting a data-driven policy from a given policy class. Like the aforementioned data-driven methods, our policy enjoys provably good performance in the large-sample regime. Unlike these methods, however, our data-driven policy also performs comparably to an oracle best-in-class policy in the small-data, large-scale regime, provided the policy class and estimates satisfy some mild conditions. We specialize and strengthen this result for linear optimization problems and two natural policy classes: the first inspired by the empirical Bayes literature in statistics and the second by the regularization literature in optimization and machine learning. For both classes, we show that the suboptimality gap between our proposed policy and the oracle policy decays exponentially fast in the number of uncertain parameters, even for a fixed amount of data. Thus, these policies retain the strong large-sample performance of traditional methods and additionally enjoy provably strong performance in the small-data, large-scale regime. Numerical experiments confirm the significant benefits of our methods.
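The following stylized toy (a quadratic objective rather than the paper's linear setting, with all parameters made up) illustrates the underlying phenomenon: with many uncertain parameters and one noisy estimate of each, the plug-in estimate-then-optimize policy can perform very poorly, while a simple shrinkage policy class contains far better policies. The paper's contribution is a data-driven way to get close to the best policy in such a class; here we simply reveal that best-in-class policy with an oracle grid search.

```python
# Stylized toy (quadratic, not the paper's linear setting) showing why plug-in
# "estimate-then-optimize" suffers when there are many parameters but little data on each,
# and how a simple shrinkage policy class can contain much better policies.
import numpy as np

rng = np.random.default_rng(2)
d, sigma = 2000, 2.0                           # many uncertain parameters, noisy estimates
mu = rng.normal(0.5, 1.0, size=d)              # unknown true objective coefficients
mu_hat = mu + rng.normal(0.0, sigma, size=d)   # one noisy estimate per parameter

def true_value(x):
    # true objective: mu'x - 0.5*||x||^2, maximized at x = mu (full information)
    return mu @ x - 0.5 * x @ x

def policy(alpha):
    # shrink each estimate toward the grand mean of the estimates, then optimize;
    # the shrunk vector itself maximizes shrunk'x - 0.5*||x||^2
    return (1 - alpha) * mu_hat + alpha * mu_hat.mean()

full_info = true_value(mu)
plug_in = true_value(policy(0.0))              # estimate-then-optimize
best_alpha = max(np.linspace(0, 1, 101), key=lambda a: true_value(policy(a)))
print(f"full information  : {full_info:10.1f}")
print(f"plug-in (alpha=0) : {plug_in:10.1f}")
print(f"best-in-class     : {true_value(policy(best_alpha)):10.1f}  (alpha={best_alpha:.2f})")
```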
Frequently, policymakers seek to roll out an intervention previously proven effective in a research study, perhaps subject to resource constraints. However, since different subpopulations may respond differently to the same treatment, there is no a priori guarantee that the intervention will be as effective in the targeted population as it was in the study. How then should policymakers target individuals to maximize intervention effectiveness? We propose a novel robust optimization approach that leverages evidence typically available in a published study. Our approach is tractable – real-world instances are easily optimized in minutes with off-the-shelf software – and flexible enough to accommodate a variety of resource and fairness constraints. We compare our approach with current practice by proving performance guarantees for both approaches, which emphasize their structural differences. We also prove an intuitive interpretation of our model in terms of regularization, penalizing differences in the demographic distribution between targeted individuals and the study population. Although the precise penalty depends on the choice of uncertainty set, we show that for special cases we can recover classical penalties from the covariate matching literature on causal inference. Finally, using real data from a large teaching hospital, we compare our approach to common practice in the particular context of reducing emergency department utilization by Medicaid patients through case management. We find that our approach can offer significant benefits over common practice, particularly when the heterogeneity in patient response to the treatment is large.
We propose a Bayesian framework for assessing the relative strengths of data-driven ambiguity sets in distributionally robust optimization (DRO) when the underlying distribution is defined by a finite-dimensional parameter. The key idea is to measure the relative size between a candidate ambiguity set and a specific, asymptotically optimal set. This asymptotically optimal set is provably the smallest convex ambiguity set that satisfies a particular Bayesian robustness guarantee with respect to a given class of constraints as the amount of data grows large. In other words, it is a subset of any other convex set that satisfies the same guarantee. Using this framework, we prove that existing, popular ambiguity sets based on statistical confidence regions are significantly larger than the asymptotically optimal set with respect to constraints that are concave in the ambiguity – the ratio of their sizes scales with the square root of the dimension of the ambiguity. By contrast, we construct new ambiguity sets that are tractable, satisfy our Bayesian robustness guarantee with finite data, and are only a small, constant factor larger than the asymptotically optimal set; we call these sets “Bayesian near-optimal.” We further prove that, asymptotically, solutions to DRO models with our Bayesian near-optimal sets enjoy frequentist robustness properties, despite their smaller size. Finally, our framework yields guidelines for practitioners for selecting between competing ambiguity set proposals in DRO. Computational evidence in portfolio allocation using real and simulated data confirms that our framework, although motivated by asymptotic analysis in a Bayesian setting, provides practical insight into the performance of various DRO models with finite data under frequentist assumptions.
The last decade has seen an explosion in the availability of data for operations research applications as part of the Big Data revolution. Motivated by this data-rich paradigm, we propose a novel schema for utilizing data to design uncertainty sets for robust optimization using statistical hypothesis tests. The approach is flexible and widely applicable, and robust optimization problems built from our new sets are computationally tractable, both theoretically and practically. Furthermore, optimal solutions to these problems enjoy a strong, finite-sample probabilistic guarantee. We also propose concrete guidelines for practitioners and illustrate our approach with applications in portfolio management and queueing. Computational evidence confirms that our data-driven sets significantly outperform conventional robust optimization techniques whenever data is available.
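To convey the flavor of the schema, here is a minimal sketch assuming a discrete (multinomial) uncertainty and the chi-squared goodness-of-fit test: any distribution the test would not reject at the 95% level is treated as plausible, and we compute the worst-case expected return over all such distributions. The paper develops a range of such sets and their tractable robust counterparts; the asset returns below are invented.

```python
# Minimal sketch of a hypothesis-test-based uncertainty set: every discrete distribution
# that the chi-squared goodness-of-fit test would NOT reject at level 5% is deemed
# plausible, and we guard against the worst one.
import numpy as np
import cvxpy as cp
from scipy.stats import chi2

rng = np.random.default_rng(3)
support = np.array([-0.05, 0.0, 0.02, 0.06])        # hypothetical single-asset returns
true_p = np.array([0.15, 0.25, 0.35, 0.25])
N = 200
p_hat = rng.multinomial(N, true_p) / N              # empirical frequencies

# Chi-squared confidence region: sum_i (p_hat_i - p_i)^2 / p_i <= chi2 quantile / N
threshold = chi2.ppf(0.95, df=len(support) - 1) / N

p = cp.Variable(len(support), nonneg=True)
stat = sum(cp.quad_over_lin(p_hat[i] - p[i], p[i]) for i in range(len(support)))
constraints = [cp.sum(p) == 1, stat <= threshold]

# Worst-case expected return over all distributions the test would not reject
worst_case = cp.Problem(cp.Minimize(support @ p), constraints)
worst_case.solve()
print(f"empirical mean return : {support @ p_hat:+.4f}")
print(f"worst-case mean return: {worst_case.value:+.4f}")
```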
Sample average approximation (SAA) is a widely used approach to data-driven decision-making under uncertainty. Under mild assumptions, SAA is both tractable and enjoys strong asymptotic performance guarantees. Similar guarantees, however, do not typically hold in finite samples. In this paper, we propose a modification of SAA, which we term Robust SAA, that retains SAA’s tractability and asymptotic properties and, additionally, enjoys strong finite-sample performance guarantees. The key to our method is linking SAA, distributionally robust optimization, and hypothesis testing of goodness-of-fit. Beyond Robust SAA, this connection provides a unified perspective enabling us to characterize the finite-sample and asymptotic guarantees of various other data-driven procedures that are based upon distributionally robust optimization. We present examples from inventory management and portfolio allocation, and demonstrate numerically that our approach outperforms other data-driven approaches in these applications.
Dynamic resource allocation (DRA) problems are an important class of dynamic stochastic optimization problems that arise in a variety of important real-world applications. DRA problems are notoriously difficult to solve to optimality since they frequently combine stochastic elements with intractably large state and action spaces. Although the artificial intelligence and operations research communities have independently proposed two successful frameworks for solving dynamic stochastic optimization problems—Monte Carlo tree search (MCTS) and mathematical optimization (MO), respectively—the relative merits of these two approaches are not well understood. In this paper, we adapt both MCTS and MO to a problem inspired by tactical wildfire management and undertake an extensive computational study comparing the two methods on instances that are large scale in terms of both the state and the action spaces. We show that both methods are able to greatly improve on a baseline, problem-specific heuristic. On smaller instances, the MCTS and MO approaches perform comparably, but the MO approach outperforms MCTS as the size of the problem increases for a fixed computational budget.
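For readers unfamiliar with the first of the two frameworks, the sketch below is a plain-vanilla UCT-style MCTS planner applied to a made-up, two-cell resource-allocation toy; it is not the adapted variant studied in the paper, and the `simulator` interface and the toy problem are hypothetical placeholders.

```python
# Generic UCT-style Monte Carlo tree search for a small finite-horizon stochastic problem.
# The planner only needs a simulator(state, action) -> (next_state, reward) interface.
import math
import random
from collections import defaultdict

class UCT:
    def __init__(self, actions, simulator, horizon=5, c=1.4):
        self.actions, self.simulator, self.horizon, self.c = actions, simulator, horizon, c
        self.N = defaultdict(int)      # visit counts, keyed by (depth, state, action)
        self.Q = defaultdict(float)    # running mean of reward-to-go, same keys

    def search(self, state, iters=2000):
        for _ in range(iters):
            self._simulate(state, depth=0)
        return max(self.actions, key=lambda a: self.Q[(0, state, a)])

    def _simulate(self, state, depth):
        if depth >= self.horizon:
            return 0.0
        total = sum(self.N[(depth, state, a)] for a in self.actions) + 1
        def ucb(a):
            n = self.N[(depth, state, a)]
            if n == 0:
                return float("inf")    # force each action to be tried at least once
            return self.Q[(depth, state, a)] + self.c * math.sqrt(math.log(total) / n)
        a = max(self.actions, key=ucb)
        next_state, reward = self.simulator(state, a)
        value = reward + self._simulate(next_state, depth + 1)
        key = (depth, state, a)
        self.N[key] += 1
        self.Q[key] += (value - self.Q[key]) / self.N[key]   # incremental mean update
        return value

# Made-up two-cell toy: each period the single response unit attends cell 0 or cell 1;
# an attended burning cell is extinguished with probability 0.7, and each cell still
# burning incurs a per-period cost (cell 1 is the more valuable one to save).
def simulator(state, action):
    burning = list(state)
    if burning[action] and random.random() < 0.7:
        burning[action] = 0
    cost = 1.0 * burning[0] + 2.0 * burning[1]
    return tuple(burning), -cost

if __name__ == "__main__":
    random.seed(0)
    planner = UCT(actions=[0, 1], simulator=simulator)
    print("action chosen from state (1, 1):", planner.search((1, 1)))
```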
Equilibrium modeling is common in a variety of fields such as game theory and transportation science. The inputs for these models, however, are often difficult to estimate, while their outputs, i.e., the equilibria they are meant to describe, are often directly observable. By combining ideas from inverse optimization with the theory of variational inequalities, we develop an efficient, data-driven technique for estimating the parameters of these models from observed equilibria. We use this technique to estimate the utility functions of players in a game from their observed actions and to estimate the congestion function on a road network from traffic count data. A distinguishing feature of our approach is that it supports both parametric and nonparametric estimation by leveraging ideas from statistical learning (kernel methods and regularization operators). In computational experiments involving Nash and Wardrop equilibria in a nonparametric setting, we find that a) we effectively estimate the unknown demand or congestion function, respectively, and b) our proposed regularization technique substantially improves the out-of-sample performance of our estimators.
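A much-simplified, purely parametric illustration of the estimation idea (the paper's machinery is far more general, handling variational inequalities and nonparametric kernel estimators): recover unknown congestion slopes of affine link latencies on parallel routes from noisy observed Wardrop equilibria, using only the equilibrium condition that all used routes share a common latency. All network parameters below are made up.

```python
# Hypothetical parametric sketch: estimate congestion slopes b_e of affine link latencies
# a_e + b_e * x_e on parallel routes from observed Wardrop equilibria. At equilibrium,
# every used route has the same latency lam, so a_e + b_e * x_e - lam ~= 0 for each link.
import numpy as np

rng = np.random.default_rng(4)
a = np.array([1.0, 1.2, 1.5])               # known free-flow latencies
b_true = np.array([0.5, 0.8, 1.0])          # unknown congestion slopes
demands = np.linspace(5.0, 15.0, 20)        # observed demand levels

def wardrop_flows(D, b):
    # equal latencies on all used routes: a_e + b_e x_e = lam, sum_e x_e = D
    lam = (D + np.sum(a / b)) / np.sum(1.0 / b)
    return (lam - a) / b

flows = np.array([wardrop_flows(D, b_true) for D in demands])
flows += rng.normal(0.0, 0.05, size=flows.shape)      # noisy traffic counts

# Least squares on the equilibrium conditions, linear in (b_1..b_E, lam_1..lam_T):
#   b_e * x_{e,t} - lam_t ~= -a_e   for every link e and observation t
E, T = len(a), len(demands)
rows, rhs = [], []
for t in range(T):
    for e in range(E):
        row = np.zeros(E + T)
        row[e] = flows[t, e]                 # coefficient on b_e
        row[E + t] = -1.0                    # coefficient on lam_t
        rows.append(row)
        rhs.append(-a[e])
theta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print("true slopes     :", b_true)
print("estimated slopes:", np.round(theta[:E], 3))
```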
In the age of big data analytics, it is increasingly important for researchers and practitioners to be familiar with methods and software tools for analyzing large data sets, formulating and solving large-scale mathematical optimization models, and sharing solutions using interactive media. Unfortunately, advanced software tools are seldom included in curricula of graduate-level operations research (OR) programs. We describe a course consisting of eight three-hour modules intended to introduce Master’s and PhD students to advanced software tools for OR: Machine Learning in R, Data Wrangling, Visualization, Big Data, Algebraic Modeling with JuMP, High-Performance and Distributed Computing, Internet and Databases, and Advanced Mixed Integer Linear Programming (MILP) Techniques. For each module, we outline content, provide course materials, summarize student feedback, and share lessons learned from two iterations of the course. Student feedback was very positive, and all students reported that the course equipped them with software skills useful for their own research. We believe our course materials could serve as a template for the development of effective OR software tools courses and discuss how they could be adapted to other educational settings.
The Black-Litterman (BL) model is a widely used asset allocation model in the financial industry. In this paper, we provide a new perspective: the key insight is to replace the statistical framework of the original approach with ideas from inverse optimization. This insight allows us to significantly expand the scope and applicability of the BL model. We provide a richer formulation that, unlike the original model, is flexible enough to incorporate investor information on volatility and market dynamics. Equally importantly, our approach allows us to move beyond the traditional mean-variance paradigm of the original model and construct “BL”-type estimators for more general notions of risk such as coherent risk measures. Computationally, we introduce and study two new “BL”-type estimators and their corresponding portfolios: a mean-variance inverse optimization (MV-IO) portfolio and a robust mean-variance inverse optimization (RMV-IO) portfolio. These two approaches are motivated by ideas from arbitrage pricing theory and volatility uncertainty. Using numerical simulation and historical backtesting, we show that both methods often demonstrate a better risk-reward trade-off than their BL counterparts and are more robust to incorrect investor views.
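For reference, the snippet below walks through the classical Black-Litterman machinery that the inverse-optimization perspective builds on: implied equilibrium returns obtained by reverse optimization, followed by the standard posterior blend with an investor view. The MV-IO and RMV-IO estimators themselves go beyond these formulas, and all numbers here are made-up placeholders.

```python
# Classical Black-Litterman steps (standard textbook formulas only).
import numpy as np

Sigma = np.array([[0.040, 0.012, 0.010],
                  [0.012, 0.030, 0.008],
                  [0.010, 0.008, 0.020]])     # asset return covariance
w_mkt = np.array([0.5, 0.3, 0.2])             # market-cap weights
delta, tau = 2.5, 0.05                        # risk aversion, prior-uncertainty scaling

# Step 1 (reverse / inverse optimization): expected returns under which the market
# portfolio is mean-variance optimal.
pi = delta * Sigma @ w_mkt

# Step 2: blend the implied returns with an investor view, here "asset 1 outperforms
# asset 2 by 2%" with view variance Omega.
P = np.array([[1.0, -1.0, 0.0]])
q = np.array([0.02])
Omega = np.array([[0.001]])

A = np.linalg.inv(tau * Sigma)
mu_bl = np.linalg.solve(A + P.T @ np.linalg.inv(Omega) @ P,
                        A @ pi + P.T @ np.linalg.inv(Omega) @ q)

w_bl = np.linalg.solve(delta * Sigma, mu_bl)  # unconstrained mean-variance weights
print("implied returns :", np.round(pi, 4))
print("BL returns      :", np.round(mu_bl, 4))
print("BL weights      :", np.round(w_bl, 3))
```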