Distributionally Robust Optimization for Sequential Decision Making.
The distributionally robust Markov decision process (MDP) approach has been proposed in the literature, where the goal is to find a distributionally robust policy that maximizes the expected total reward under the most adversarial joint distribution of the uncertain parameters. In this paper, we study distributionally robust MDPs whose ambiguity sets are of a format that easily incorporates statistical information about the uncertainty estimated from historical data. In this way, we generalize existing work on distributionally robust MDPs with generalized-moment-based ambiguity sets and with statistical-distance-based ambiguity sets: information characteristic of the former class, such as moments and dispersion measures, can be combined with the sample-driven descriptions on which the latter class critically depends. We show that, under this format of ambiguity sets, the resulting distributionally robust MDP remains tractable under mild technical conditions. More specifically, a distributionally robust policy can be constructed by solving a collection of one-stage convex optimization subproblems.
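To give a flavor of the one-stage subproblems involved, the sketch below solves the inner worst-case expectation for one simple statistical-distance-based ambiguity set, an L1 ball around an empirical transition estimate, and uses it in a robust Bellman backup. This particular ambiguity set, the function names, and the greedy solution method are illustrative assumptions for exposition, not the paper's construction.

```python
def worst_case_expectation(p_hat, values, radius):
    """Minimize sum_i q_i * values[i] over distributions q on the simplex
    with ||q - p_hat||_1 <= radius (an illustrative L1 ambiguity ball).

    This linear program has a greedy solution: shift probability mass from
    the highest-value outcomes to the single lowest-value outcome. Moving
    mass m between two outcomes changes the L1 distance by 2m, so the total
    movable mass is radius / 2.
    """
    n = len(values)
    q = list(p_hat)
    lo = min(range(n), key=lambda i: values[i])  # worst outcome for us
    budget = radius / 2.0
    # Drain mass from outcomes with the largest values first.
    for i in sorted(range(n), key=lambda i: -values[i]):
        if i == lo or budget <= 0:
            continue
        move = min(q[i], budget)
        q[i] -= move
        q[lo] += move
        budget -= move
    return sum(qi * vi for qi, vi in zip(q, values)), q


def robust_bellman_backup(rewards, p_hats, next_values, gamma, radius):
    """One robust Bellman update at a single state: for each action, evaluate
    the worst-case expected continuation value over its ambiguity set, then
    act greedily. Each action contributes one convex (here, linear) one-stage
    subproblem, mirroring the 'collection of subproblems' structure above."""
    return max(
        r + gamma * worst_case_expectation(p, next_values, radius)[0]
        for r, p in zip(rewards, p_hats)
    )
```

For a uniform empirical estimate over four next states with values (1, 2, 3, 4) and radius 0.4, the worst case shifts 0.2 of mass from the best state to the worst, lowering the expected value from 2.5 to 1.9; moment- or dispersion-type side information would enter as additional convex constraints on q in the same subproblem.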
Publisher URL: http://arxiv.org/abs/1801.04745