# Algorithmic Identification of Probabilities Is Hard

Affiliation: ESCAPE - Systèmes complexes, automates et pavages, LIRMM (Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier)
Abstract: Suppose that we are given an infinite binary sequence which is random for a Bernoulli measure of parameter p. By the law of large numbers, the frequency of zeros in the sequence tends to p, and thus we can get better and better approximations of p as we read the sequence. In this paper we study a similar question, but from the viewpoint of inductive inference. We now suppose that p is a computable real, and we ask for more: as we read more and more bits of our random sequence, we must eventually guess the exact parameter p (in the form of its Turing code). Can one do such a thing uniformly for all sequences that are random for computable Bernoulli measures, or even for a 'large enough' fraction of them? In this paper, we give a negative answer to this question. In fact, we prove a very general negative result which extends far beyond the class of Bernoulli measures.
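The approximation step mentioned in the abstract (the frequency of zeros tending to p) can be illustrated with a small simulation. This is a hypothetical sketch, not code from the paper: the function `estimate_parameter` and the choice p = 1/3 are illustrative assumptions, and a bit is drawn as 0 with probability p.

```python
import random

def estimate_parameter(bits):
    """Estimate the Bernoulli parameter p as the frequency of zeros,
    following the law of large numbers mentioned in the abstract."""
    return bits.count(0) / len(bits)

# Hypothetical demo: sample a long sequence from a Bernoulli(p) measure
# and check that the frequency of zeros approaches p.
random.seed(0)
p = 1 / 3  # a computable real, chosen arbitrarily for this sketch
sequence = [0 if random.random() < p else 1 for _ in range(100_000)]
print(estimate_parameter(sequence))  # close to 1/3 for a long sequence
```

Note that this only yields better and better *approximations* of p; the paper's question is the strictly harder one of eventually outputting a Turing code for p itself.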
Document type: Conference paper
ALT: Algorithmic Learning Theory, Oct 2014, Bled, Slovenia. LNCS (8776), pp. 85-95, 2014, Algorithmic Learning Theory. 〈10.1007/978-3-319-11662-4_7〉

Cited literature [5 references]

https://hal.archives-ouvertes.fr/hal-01397246
Contributor: Benoit Monin <>
Submitted on: Thursday, November 17, 2016 - 15:14:08
Last modified on: Thursday, May 24, 2018 - 15:59:23
Long-term archiving on: Thursday, March 16, 2017 - 16:31:16

### File

paper_learning_measures_remast...
Files produced by the author(s)

### Citation

Laurent Bienvenu, Benoit Monin, Alexander Shen. Algorithmic Identification of Probabilities Is Hard. ALT: Algorithmic Learning Theory, Oct 2014, Bled, Slovenia. LNCS (8776), pp. 85-95, 2014, Algorithmic Learning Theory. 〈10.1007/978-3-319-11662-4_7〉. 〈hal-01397246〉

### Metrics

Record views: 202

File downloads