Quickly boosting decision trees: pruning underachieving features early, Proceedings of the 30th International Conference on Machine Learning (ICML), 2013.
Best Arm Identification in Multi-armed Bandits, Proceedings of the 23rd Conference on Learning Theory, 2010.
URL : https://hal.archives-ouvertes.fr/hal-00654404
Bandit problems with infinitely many arms, The Annals of Statistics, vol.25, issue.5, pp.2103-2116, 1997.
DOI : 10.1214/aos/1069362389
URL : http://doi.org/10.1214/aos/1069362389
Two-target algorithms for infinite-armed bandits with Bernoulli rewards, Advances in Neural Information Processing Systems (NIPS), 2013.
URL : https://hal.archives-ouvertes.fr/hal-00920045
Pure exploration in finitely-armed and continuous-armed bandits, Theoretical Computer Science, vol.412, issue.19, pp.1832-1852, 2011.
DOI : 10.1016/j.tcs.2010.12.059
URL : https://hal.archives-ouvertes.fr/hal-00609550
X-armed bandits, Journal of Machine Learning Research, vol.12, pp.1587-1627, 2011.
URL : https://hal.archives-ouvertes.fr/hal-00450235
Optimal Adaptive Policies for Sequential Allocation Problems, Advances in Applied Mathematics, vol.17, issue.2, pp.122-142, 1996.
DOI : 10.1006/aama.1996.0007
URL : https://doi.org/10.1006/aama.1996.0007
Fast boosting using adversarial bandits, Proceedings of the 27th International Conference on Machine Learning (ICML), 2010.
URL : https://hal.archives-ouvertes.fr/in2p3-00614564
Simple regret for infinitely many armed bandits, 2015.
URL : https://hal.archives-ouvertes.fr/hal-01153538
Finding the most biased coin with fewest flips, CoRR, abs/1202, 2012.
PAC identification of a bandit arm relative to a reward quantile, AAAI, 2017.
Infinitely Many-Armed Bandits with Unknown Value Distribution, European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD), pp.307-322, 2014.
DOI : 10.1007/978-3-662-44848-9_20
Adaptive sampling for large scale boosting, Journal of Machine Learning Research, vol.15, issue.1, pp.1431-1453, 2014.
Using LazyBoosting for word sense disambiguation, Proceedings of the Second International Workshop on Evaluating Word Sense Disambiguation Systems, 2001.
Action Elimination and Stopping Conditions for the Multi-Armed Bandit and Reinforcement Learning Problems, Journal of Machine Learning Research, vol.7, pp.1079-1105, 2006.
Experiments with a new boosting algorithm, Proceedings of the Thirteenth International Conference on Machine Learning (ICML), pp.148-156, 1996.
Best arm identification: A unified approach to fixed budget and fixed confidence, Advances in Neural Information Processing Systems (NIPS), 2012.
URL : https://hal.archives-ouvertes.fr/hal-00772615
Optimal best arm identification with fixed confidence, Proceedings of the 29th Conference on Learning Theory, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01273838
Black-box optimization of noisy functions with unknown smoothness, Advances in Neural Information Processing Systems (NIPS), 2015.
URL : https://hal.archives-ouvertes.fr/hal-01222915
The Power of Adaptivity in Identifying Statistical Alternatives, Advances in Neural Information Processing Systems (NIPS), 2016.
PAC subset selection in stochastic multi-armed bandits, Proceedings of the 29th International Conference on Machine Learning (ICML), 2012.
Almost optimal exploration in multi-armed bandits, Proceedings of the 30th International Conference on Machine Learning (ICML), 2013.
Learning the distribution with largest mean: two bandit frameworks, ESAIM: Proceedings and Surveys, vol.60, 2017.
DOI : 10.1051/proc/201760114
URL : https://hal.archives-ouvertes.fr/hal-01449822
Information complexity in bandit subset selection, Proceedings of the 26th Conference on Learning Theory, 2013.
On the Complexity of Best Arm Identification in Multi-Armed Bandit Models, Journal of Machine Learning Research, vol.17, issue.1, pp.1-42, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01024894
Multi-armed bandits in metric spaces, Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing (STOC'08), 2008.
DOI : 10.1145/1374376.1374475
URL : http://www.cs.cornell.edu/~rdk/papers/bandits-lip.pdf
Asymptotically efficient adaptive allocation rules, Advances in Applied Mathematics, vol.6, issue.1, pp.4-22, 1985.
DOI : 10.1016/0196-8858(85)90002-8
URL : https://doi.org/10.1016/0196-8858(85)90002-8
Lipschitz Bandits: Regret lower bounds and optimal algorithms, Proceedings of the 27th Conference on Learning Theory, 2014.
URL : https://hal.archives-ouvertes.fr/hal-01092791
The sample complexity of exploration in the multi-armed bandit problem, Journal of Machine Learning Research, vol.5, 2004.
On the likelihood that one unknown probability exceeds another in view of the evidence of two samples, Biometrika, vol.25, issue.3/4, pp.285-294, 1933.
Algorithms for infinitely many-armed bandits, Advances in Neural Information Processing Systems (NIPS), 2009.