Approximation of Lorenz-Optimal Solutions in Multiobjective Markov Decision Processes

Patrice Perny, Paul Weng, Judy Goldsmith, Josiah P. Hanna

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper is devoted to fair optimization in Multiobjective Markov Decision Processes (MOMDPs). An MOMDP extends the MDP model for planning under uncertainty to settings where several reward functions must be optimized simultaneously. This covers multiagent problems, where the rewards define individual utility functions, as well as multicriteria problems, where the rewards refer to different features. In this setting, we study the determination of policies leading to Lorenz-non-dominated tradeoffs. Lorenz dominance is a refinement of Pareto dominance that was introduced in Social Choice theory for the measurement of inequalities. In this paper, we introduce methods to efficiently approximate the set of Lorenz-non-dominated solutions of infinite-horizon, discounted MOMDPs. The approximations are polynomial-sized subsets of that set.
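To make the dominance notion in the abstract concrete, here is a minimal illustrative sketch (not code from the paper) of a Lorenz dominance test between reward vectors, assuming the standard definition from the social choice literature: the generalized Lorenz vector of x is the vector of cumulative sums of x's components sorted in nondecreasing order, and x Lorenz-dominates y iff the Lorenz vector of x Pareto-dominates that of y. All function names are illustrative.

```python
def lorenz_vector(x):
    """Generalized Lorenz vector: cumulative sums of the components
    of x taken in nondecreasing order."""
    out, total = [], 0.0
    for v in sorted(x):
        total += v
        out.append(total)
    return out

def pareto_dominates(a, b):
    """a weakly dominates b in every component, strictly in at least one."""
    return (all(ai >= bi for ai, bi in zip(a, b))
            and any(ai > bi for ai, bi in zip(a, b)))

def lorenz_dominates(x, y):
    """x Lorenz-dominates y iff L(x) Pareto-dominates L(y)."""
    return pareto_dominates(lorenz_vector(x), lorenz_vector(y))

# Example: (3, 3) and (1, 5) have the same total reward and are
# Pareto-incomparable, but the more equal split is Lorenz-preferred.
print(pareto_dominates([3, 3], [1, 5]))  # False
print(lorenz_dominates([3, 3], [1, 5]))  # True: L(3,3)=(3,6) dominates L(1,5)=(1,6)
```

The example shows why Lorenz dominance refines Pareto dominance for fair optimization: among tradeoffs with comparable totals, it favors the more equitable distribution across objectives.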
Original language: English
Title of host publication: Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence
Place of publication: Arlington, Virginia, USA
Publisher: AUAI Press
Pages: 508–517
Publication status: Published - 11 Aug 2013
Event: Twenty-Ninth Conference on Uncertainty in Artificial Intelligence - Bellevue, United States
Duration: 11 Jul 2013 – 15 Jul 2013
http://auai.org/uai2013/

Conference

Conference: Twenty-Ninth Conference on Uncertainty in Artificial Intelligence
Abbreviated title: UAI 2013
Country/Territory: United States
City: Bellevue
Period: 11/07/13 – 15/07/13
Internet address: http://auai.org/uai2013/

