Abstract
We study an online linear classification problem in which the data is generated by strategic agents who manipulate their features in an effort to change the classification outcome. In each round, the learner deploys a classifier; an adversarially chosen agent then arrives and possibly manipulates her features to optimally respond to the learner's choice of classifier. The learner has no knowledge of the agents' utility functions or "real" features, which may vary widely across agents. Instead, the learner observes only their "revealed preferences", i.e., the manipulated feature vectors they provide. For a broad family of agent cost functions, we give a computationally efficient learning algorithm that obtains diminishing "Stackelberg regret": a form of policy regret guaranteeing that the learner's loss is nearly as small as that of the best classifier in hindsight, even accounting for the fact that agents would have best-responded differently to the optimal classifier.
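The round structure described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's algorithm: it assumes a hypothetical quadratic manipulation cost and a fixed payoff of 1 for a positive label, so the agent moves to the nearest positively classified point only when the gain exceeds the cost. The paper's actual cost-function family and learner update are more general.

```python
import numpy as np

def best_response(x, theta, cost=1.0):
    """Agent with true features x best-responds to linear classifier theta.

    Illustrative assumption: quadratic cost `cost * ||x' - x||^2` and a
    payoff of 1 for being classified positive (theta . x' >= 0). The agent
    manipulates only if the cheapest manipulation is worth that payoff.
    """
    margin = theta @ x               # signed score under the deployed classifier
    if margin >= 0:
        return x                     # already classified positive: no manipulation
    # Cheapest move lands exactly on the decision boundary theta . x' = 0
    move = -margin / (theta @ theta) * theta
    if cost * (move @ move) <= 1.0:  # manipulate only when the payoff covers the cost
        return x + move
    return x                         # manipulation too expensive: report truthfully

theta = np.array([1.0, -0.5])          # learner's deployed linear classifier
x = np.array([-0.2, 0.4])              # agent's true (unobserved) features
x_revealed = best_response(x, theta)   # the learner observes only this vector
```

Here the agent's true features would be classified negative, so she shifts just onto the decision boundary; the learner never sees `x` itself, only `x_revealed`.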
Original language | English (US)
---|---
Title of host publication | ACM EC 2018 - Proceedings of the 2018 ACM Conference on Economics and Computation
Publisher | Association for Computing Machinery, Inc
Pages | 55-70
Number of pages | 16
ISBN (Electronic) | 9781450358293
State | Published - Jun 11 2018
Event | 19th ACM Conference on Economics and Computation, EC 2018 - Ithaca, United States
Duration | Jun 18 2018 → Jun 22 2018
Publication series
Name | ACM EC 2018 - Proceedings of the 2018 ACM Conference on Economics and Computation
---|---
Other
Other | 19th ACM Conference on Economics and Computation, EC 2018
---|---
Country/Territory | United States
City | Ithaca
Period | 6/18/18 → 6/22/18
Bibliographical note
Publisher Copyright: © 2018 Copyright held by the owner/author(s). Publication rights licensed to ACM.
Keywords
- Online learning
- Revealed preferences
- Stackelberg regret
- Strategic agents
- Strategic classification