Abstract: Tailoring nearest neighbors algorithms to boosting is an important problem. Recent papers study an approach, UNN, which provably minimizes particular convex surrogates under weak assumptions. However, numerical issues make it necessary to experimentally tweak parts of the UNN algorithm, at the possible expense of the algorithm's convergence and performance. In this paper, we propose a lightweight alternative, GNNB, which optimizes proper scoring rules from a very broad set, and we establish formal convergence rates under the boosting framework that, perhaps surprisingly, compete with those known for UNN. GNNB is an adaptive Newton-Raphson algorithm in the same lineage as the popular Gentle AdaBoost. To the best of our knowledge, no such boosting-compliant convergence rates were previously known for these algorithms. We provide experiments on a dozen domains, including the challenging Caltech and SUN computer vision databases. The results show that GNNB significantly outperforms UNN, both in convergence rate and in the quality of the solution obtained, and that GNNB is a simple and efficient contender to techniques usable on very large domains, such as stochastic gradient descent, about which comparatively little is known to date. The experiments also include a divide-and-conquer improvement of GNNB that exploits its link with proper scoring rule optimization.
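To make the lineage mentioned above concrete, here is a minimal, illustrative sketch of a Gentle AdaBoost-style loop using a k-nearest-neighbor weak learner: each round performs a weighted least-squares (Newton-style) fit, adds it to the additive score, and reweights examples under the exponential loss. This is only a toy sketch of the algorithmic family; the function name, the choice of k, the distance, and the weighting scheme are assumptions for illustration, not the paper's GNNB algorithm.

```python
import numpy as np

def gentle_boost_knn(X, y, k=3, n_rounds=10):
    """Toy Gentle AdaBoost-style loop with a k-NN weak learner.

    Illustrative sketch only (not the paper's GNNB).
    X: (n, d) array of examples; y: labels in {-1, +1}.
    Returns the sign of the additive score on the training set.
    """
    n = len(X)
    w = np.ones(n) / n              # example weights
    F = np.zeros(n)                 # additive score on the training set

    # Precompute each example's k nearest neighbors (Euclidean distance),
    # skipping the example itself (column 0 of the argsort).
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]

    for _ in range(n_rounds):
        # Weak output: weighted mean label over each example's neighbors
        # (the weighted least-squares / Newton-style fit of this round).
        f = np.array([np.average(y[nn[i]], weights=w[nn[i]])
                      for i in range(n)])
        F += f                      # additive update, as in Gentle AdaBoost
        w = np.exp(-y * F)          # exponential-loss example weights
        w /= w.sum()                # renormalize
    return np.sign(F)
```

On a well-separated two-cluster toy set, a few rounds of this loop drive the training predictions to the correct labels, which is the qualitative behavior the abstract's convergence claims formalize for GNNB.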