In this thesis, a dynamic theory of learning, also called ``online learning'' in computer science, is presented through stochastic approximations of the regression function in reproducing kernel Hilbert spaces (RKHS). The setting starts from a probability measure on an input-output space, with samples drawn sequentially in an independent and identically distributed way. Online learning algorithms exploit the samples recursively, in contrast to ``batch learning'', which has access to all the data at once. The algorithms are based on stochastic approximations of the regression function in an RKHS. Novel probabilistic exponential inequalities in Hilbert spaces, originating in the Russian probability school, are exploited to study martingale and reverse-martingale expansions of the error. Tight probabilistic upper bounds are obtained, in the sense that over a certain range of complexity classes, online learning algorithms achieve the same convergence rates as batch learning and thus asymptotically attain rates that are optimal in certain senses.
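The recursive scheme the abstract describes can be sketched as a stochastic-gradient approximation of the regression function in an RKHS: each incoming sample updates the current iterate along the kernel section at that point. The sketch below is illustrative only; the Gaussian kernel, the decaying step size, and the noisy-sine target are assumptions for the demonstration, not the specific setting analysed in the thesis.

```python
import numpy as np

def gaussian_kernel(x, xp, sigma=0.1):
    # K(x, x') = exp(-|x - x'|^2 / (2 sigma^2)), an RKHS-inducing kernel
    return np.exp(-(x - xp) ** 2 / (2 * sigma ** 2))

def online_kernel_regression(xs, ys, gamma0=0.5, sigma=0.1):
    """One pass over an i.i.d. stream, applying the recursion
        f_{t+1} = f_t - gamma_t * (f_t(x_t) - y_t) * K(x_t, .)
    The iterate f_t is stored through its kernel expansion coefficients."""
    centers, coefs = [], []
    for t, (x, y) in enumerate(zip(xs, ys), start=1):
        # evaluate the current iterate at the new sample point
        fx = sum(c * gaussian_kernel(x, xc, sigma)
                 for c, xc in zip(coefs, centers))
        gamma = gamma0 / np.sqrt(t)          # decaying step size
        centers.append(x)
        coefs.append(-gamma * (fx - y))      # new kernel-section coefficient
    def f(x):
        return sum(c * gaussian_kernel(x, xc, sigma)
                   for c, xc in zip(coefs, centers))
    return f

# usage: approximate a noisy sine from a sequentially sampled stream
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, 2000)
ys = np.sin(2 * np.pi * xs) + 0.1 * rng.normal(size=2000)
f = online_kernel_regression(xs, ys)
```

Note the contrast with batch learning: no sample is revisited, and the cost of each update is linear in the number of samples seen so far, since the iterate is kept as a growing kernel expansion.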