Our deep variant, which uses several hidden layers, makes it possible to learn the internal representations of the inputs and their interactions separately. The results obtained on the two tasks, ATIS and MEDIA, are state-of-the-art and confirm the …
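
As a purely illustrative aside before the references, here is a minimal sketch of the kind of architecture the abstract describes: one hidden layer learning an internal representation of each input, and a further layer modelling the interactions between those representations. The class name, layer choices (including the GRU), and dimensions are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch only (not the authors' code): a deep tagger in which one
# hidden layer first learns an internal representation of each input token,
# and a separate recurrent layer then models the interactions between tokens.
# All names and dimensions below are illustrative assumptions.
import torch
import torch.nn as nn

class DeepVariantTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=200, num_labels=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Hidden layer 1: internal representation of the inputs, token by token.
        self.input_repr = nn.Sequential(nn.Linear(emb_dim, hidden_dim), nn.ReLU())
        # Hidden layer 2: interactions between representations across the
        # sequence (a GRU is one plausible choice; the paper's layer may differ).
        self.interaction = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_labels)

    def forward(self, token_ids):                    # token_ids: (batch, seq_len)
        h = self.input_repr(self.embed(token_ids))   # per-token representations
        h, _ = self.interaction(h)                   # interactions across tokens
        return self.out(h)                           # per-token label scores

# Dummy batch shaped like an ATIS/MEDIA slot-filling input: 2 sentences, 12 tokens.
logits = DeepVariantTagger(vocab_size=1000)(torch.randint(0, 1000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 50])
```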

Bengio Y., Ducharme R., Vincent P. and Jauvin C., A neural probabilistic language model, Journal of Machine Learning Research, vol. 3, pp. 1137-1155, 2003.

Bonneau-Maynard H., Ayache C., Bechet F., Denis A., Kuhn A., Lefèvre F., Mostefa D., Quignard M., Rosset S. et al., Results of the French Evalda-Media evaluation campaign for literal understanding, LREC, pp. 2054-2059, 2006.
URL : https://hal.archives-ouvertes.fr/hal-01160167

Cho K., Van Merrienboer B., Gülçehre Ç., Bougares F., Schwenk H. and Bengio Y., Learning phrase representations using RNN encoder-decoder for statistical machine translation, 2014.

Collobert R. and Weston J., A unified architecture for natural language processing: deep neural networks with multitask learning, Proceedings of ICML, pp. 160-167, 2008.

Dahl D., Bates M., Brown M., Fisher W., Hunicke-Smith K., Pallett D., Pao C., Rudnicky A. et al., Expanding the scope of the ATIS task: the ATIS-3 corpus, Proceedings of the HLT Workshop, ACL, 1994.

Dinarelli M. and Tellier I., Étude des réseaux de neurones récurrents pour étiquetage de séquences, Actes de la 23ème conférence sur le Traitement Automatique des Langues Naturelles (TALN), 2016.

Dinarelli M. and Tellier I., Improving recurrent neural networks for sequence labelling, 2016.

Dinarelli M. and Tellier I., New recurrent neural network variants for sequence labeling, Proceedings of the 17th International Conference on Intelligent Text Processing and Computational Linguistics, 2016.

Dupont Y., Dinarelli M. and Tellier I., Label-dependencies aware recurrent neural networks, Proceedings of the 18th International Conference on Computational Linguistics and Intelligent Text Processing, 2017.

Hahn S., Dinarelli M., Raymond C., Lefèvre F., Lehnen P., De Mori R., Moschitti A., Ney H. and Riccardi G., Comparing stochastic approaches to spoken language understanding in multiple languages, IEEE TASLP, 2010.

He K., Zhang X., Ren S. and Sun J., Delving deep into rectifiers: surpassing human-level performance on ImageNet classification, IEEE ICCV, pp. 1026-1034, 2015.

Hinton G., Deng L., Yu D., Mohamed A., Jaitly N., Senior A., Vanhoucke V. et al., Deep neural networks for acoustic modeling in speech recognition, IEEE Signal Processing Magazine, vol. 29, issue 6, pp. 82-97, 2012.

Hochreiter S. and Schmidhuber J., Long short-term memory, Neural Computation, vol. 9, issue 8, pp. 1735-1780, 1997.

Huang Z., Xu W. and Yu K., Bidirectional LSTM-CRF models for sequence tagging, arXiv preprint, 2015.

Lafferty J., McCallum A. and Pereira F., Conditional random fields: probabilistic models for segmenting and labeling sequence data, Proceedings of ICML, pp. 282-289, 2001.

Lample G., Ballesteros M., Subramanian S., Kawakami K. and Dyer C., Neural architectures for named entity recognition, 2016.

Ma X. and Hovy E., End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 2016.

Mesnil G., He X., Deng L. and Bengio Y., Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding, 2013.

Mikolov T., Karafiát M., Burget L., Černocký J. and Khudanpur S., Recurrent neural network based language model, INTERSPEECH 2010, 11th Annual Conference of the International Speech Communication Association, pp. 1045-1048, 2010.

Mikolov T., Kombrink S., Burget L., Černocký J. and Khudanpur S., Extensions of recurrent neural network language model, ICASSP, pp. 5528-5531, 2011.

Vukotic V., Raymond C. and Gravier G., Is it time to switch to word embedding and recurrent neural networks for spoken language understanding?, InterSpeech, 2015.

Vukotic V., Raymond C. and Gravier G., A step beyond local observations with a dialog aware bidirectional GRU network for Spoken Language Understanding, Interspeech, 2016.