Weakly supervised parsing with rules

Abstract: This work proposes a new research direction to address the lack of structures in traditional n-gram models. It is based on a weakly supervised dependency parser that can model speech syntax without relying on any annotated training corpus. Labeled data is replaced by a few hand-crafted rules that encode basic syntactic knowledge. Bayesian inference then samples the rules, disambiguating and combining them to create complex tree structures that maximize a discriminative model's posterior on a target unlabeled corpus. This posterior encodes sparse selectional preferences between a head word and its dependents. The model is evaluated on English and Czech newspaper texts, and is then validated on French broadcast news transcriptions.
Document type:
Conference paper
INTERSPEECH 2013, Aug 2013, Lyon, France. pp.2192-2196, 2013

Cited literature: 26 references

Contributor: Christophe Cerisara
Submitted on: Friday, September 6, 2013 - 07:00:04
Last modified on: Tuesday, December 18, 2018 - 16:38:01
Document(s) archived on: Saturday, December 7, 2013 - 04:14:25


Publisher files allowed on an open archive


  • HAL Id : hal-00850437, version 1



Christophe Cerisara, Alejandra Lorenzo, Pavel Kral. Weakly supervised parsing with rules. INTERSPEECH 2013, Aug 2013, Lyon, France. pp.2192-2196, 2013. 〈hal-00850437〉


