Many avenues remain open. One could use meta-attributes characterizing particular attributes of the datasets, but various experiments [28, 5] have shown that the properties of an attribute in the context of the other attributes are at least as important. It would therefore be interesting to let the dissimilarity exploit such relational meta-attributes, such as covariance or mutual information. Moreover, although diverse and drawn from very different approaches, the meta-attributes used in our experiments do not fully cover the state of the art. Recent years have been rich in contributions introducing new meta-attributes [29, 15, 27, 35], whose use could reveal the value of a dissimilarity-based approach in new contexts. Finally, since the effectiveness of the dissimilarity-based approach appears highly context-dependent (as is often the case in learning and meta-learning), it could be interesting to design a meta-attribute evaluation method that takes their diverse natures into account (global
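As an illustration, the relational meta-attributes mentioned above (covariance and mutual information between pairs of attributes) could be computed along the following lines. This is only a sketch: the function names are ours, and it makes no claim about how such quantities would actually be plugged into the dissimilarity.

```python
import math
from collections import Counter

def covariance(x, y):
    """Sample covariance between two numeric attributes."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)

def mutual_information(x, y):
    """Mutual information (in bits) between two discrete attributes."""
    n = len(x)
    px, py = Counter(x), Counter(y)
    pxy = Counter(zip(x, y))
    return sum(
        (c / n) * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
        for (a, b), c in pxy.items()
    )

def relational_meta_attributes(columns):
    """One relational meta-attribute per pair of attributes
    (here: mutual information), given the dataset as columns."""
    return {
        (i, j): mutual_information(columns[i], columns[j])
        for i in range(len(columns))
        for j in range(i + 1, len(columns))
    }
```

Unlike per-attribute statistics, each value here describes an attribute *in the context of another*, which is exactly the kind of information the paragraph above argues is at least as important.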

In the context of data analysis, a particular strength of our approach is that it enables a unified characterization of data-analysis experiments. Indeed, given any representation of the data-analysis process and of its results, that representation can be integrated into the dissimilarity, thus allowing complete experiments to be compared directly. This is a first step toward new approaches to intelligent data-analysis assistance.
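To make this concrete, here is a minimal sketch of how a dissimilarity could combine heterogeneous descriptions of complete experiments. The component choices (an accuracy score, a set of process operators) and all names are our own illustrative assumptions, not the method proposed in this work.

```python
def numeric_diss(a, b):
    # Normalised absolute difference between two scalar results
    # (e.g. accuracy scores); 0 when identical.
    return abs(a - b) / (abs(a) + abs(b) or 1)

def set_diss(a, b):
    # Jaccard dissimilarity between two sets (e.g. the operators
    # appearing in the analysis process).
    a, b = set(a), set(b)
    return 1 - len(a & b) / len(a | b) if (a | b) else 0.0

def experiment_dissimilarity(e1, e2, components):
    # Each component is (key, weight, dissimilarity function);
    # the overall score is the weighted mean of component scores.
    total = sum(w for _, w, _ in components)
    return sum(w * d(e1[k], e2[k]) for k, w, d in components) / total

# Two experiments described both by their result and by their process.
exp_a = {"accuracy": 0.91, "operators": {"normalize", "j48"}}
exp_b = {"accuracy": 0.87, "operators": {"normalize", "svm"}}
components = [("accuracy", 1.0, numeric_diss),
              ("operators", 1.0, set_diss)]
score = experiment_dissimilarity(exp_a, exp_b, components)
```

The point of the design is that any new description of a process or of its results only requires one more `(key, weight, function)` entry, so complete experiments remain directly comparable.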

Bibliography

H. Akaike, A new look at the statistical model identification, IEEE Transactions on Automatic Control, vol.19, issue.6, pp.716-723, 1974.

G. Batista and D. F. Silva, How k-nearest neighbor parameters affect its performance, Argentine Symposium on Artificial Intelligence, pp.1-12, 2009.

P. Brazdil, J. Gama, and B. Henery, Characterizing the applicability of classification algorithms using meta-level learning, European Conference on Machine Learning, pp.83-102, 1994.
DOI: 10.1007/3-540-57868-4_52

L. Breiman, Random forests, Machine Learning, vol.45, issue.1, pp.5-32, 2001.
DOI: 10.1023/A:1010933404324

G. Brown, A. Pocock, M. Zhao, and M. Luján, Conditional likelihood maximisation: a unifying framework for information theoretic feature selection, The Journal of Machine Learning Research, vol.13, issue.1, pp.27-66, 2012.

J. G. Cleary and L. E. Trigg, K*: An instance-based learner using an entropic distance measure, Proceedings of the 12th International Conference on Machine Learning, pp.108-114, 1995.

J. Cohen, Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit, Psychological Bulletin, vol.70, issue.4, p.213, 1968.
DOI: 10.1037/h0026256

L. Dong, E. Frank, and S. Kramer, Ensembles of Balanced Nested Dichotomies for Multi-class Problems, Knowledge Discovery in Databases: PKDD 2005, pp.84-95, 2005.
DOI: 10.1007/11564126_13

A. Frank, On Kuhn's Hungarian Method – a tribute from Hungary, Naval Research Logistics, vol.52, issue.1, pp.2-5, 2005.
DOI: 10.1002/nav.20056

E. Frank, Fully supervised training of Gaussian radial basis function networks in WEKA, 2014.

E. Frank, Y. Wang, S. Inglis, G. Holmes, and I. H. Witten, Using model trees for classification, Machine Learning, pp.63-76, 1998.

J. Fürnkranz and J. Petrak, Extended data characteristics, 2002.

C. Giraud-Carrier, R. Vilalta, and P. Brazdil, Introduction to the Special Issue on Meta-Learning, Machine Learning, vol.54, issue.3, pp.187-193, 2004.
DOI: 10.1023/B:MACH.0000015878.60765.42

M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten, The WEKA data mining software: an update, ACM SIGKDD Explorations Newsletter, vol.11, issue.1, pp.10-18, 2009.
DOI: 10.1145/1656274.1656278

T. K. Ho and M. Basu, Complexity measures of supervised classification problems, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.24, issue.3, pp.289-300, 2002.

G. Holmes, M. Hall, and E. Frank, Generating Rule Sets from Model Trees, Australasian Joint Conference on Artificial Intelligence, pp.1-12, 1999.
DOI: 10.1007/3-540-46695-9_1
URL: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.148.9721

A. Kalousis, Algorithm selection via meta-learning, 2002.

A. Kalousis, J. Gama, and M. Hilario, On Data and Algorithms: Understanding Inductive Performance, Machine Learning, pp.275-312, 2004.
DOI: 10.1023/B:MACH.0000015882.38031.85
URL: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.464.6813

A. Kalousis and M. Hilario, Model selection via meta-learning: a comparative study, International Journal on Artificial Intelligence Tools, vol.10, issue.04, pp.525-554, 2001.
DOI: 10.1109/tai.2000.889901

A. Kalousis and M. Hilario, Feature selection for meta-learning, Proceedings of the 5th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD '01, pp.222-233, 2001.

I. Kononenko and I. Bratko, Information-based evaluation criterion for classifier's performance, Machine Learning, pp.67-80, 1991.
DOI: 10.1007/BF00153760
URL: https://repozitorij.uni-lj.si/Dokument.php?id=49317

H. W. Kuhn, The Hungarian method for the assignment problem, Naval Research Logistics Quarterly, vol.2, issue.1-2, pp.83-97, 1955.

R. Leite, P. Brazdil, and J. Vanschoren, Selecting Classification Algorithms with Active Testing, Machine Learning and Data Mining in Pattern Recognition, pp.117-131, 2012.
DOI: 10.1007/978-3-642-31537-4_10
URL: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.416.9998

E. Leyva, A. Gonzalez, and R. Perez, A set of complexity measures designed for applying meta-learning to instance selection, IEEE Transactions on Knowledge and Data Engineering, vol.27, issue.2, pp.354-367, 2015.

D. J. C. MacKay, Introduction to Gaussian processes, NATO ASI Series F Computer and Systems Sciences, vol.168, pp.133-166, 1998.

I. Ntoutsi, A. Kalousis, and Y. Theodoridis, A general framework for estimating similarity of datasets and decision trees: exploring semantic similarity of decision trees, SDM, pp.810-821, 2008.
DOI: 10.1137/1.9781611972788.73

H. Peng, F. Long, and C. Ding, Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.27, issue.8, pp.1226-1238, 2005.

Y. Peng, P. A. Flach, P. Brazdil, and C. Soares, Decision tree-based data characterization for meta-learning, IDDM-2002, p.111, 2002.

B. Pfahringer, H. Bensusan, and C. Giraud-Carrier, Tell me who can learn you and I can tell you who you are: Landmarking various learning algorithms, Proceedings of the 17th International Conference on Machine Learning, pp.743-750, 2000.

F. Serban, Toward effective support for data mining using intelligent discovery assistance, 2013.

J. Smid, Computational Intelligence Methods in Metalearning, 2016.

N. Smirnov, Table for Estimating the Goodness of Fit of Empirical Distributions, The Annals of Mathematical Statistics, vol.19, issue.2, pp.279-281, 1948.
DOI: 10.1214/aoms/1177730256

A. J. Smola and B. Schölkopf, A tutorial on support vector regression, Statistics and Computing, vol.14, issue.3, pp.199-222, 2004.

Q. Sun and B. Pfahringer, Pairwise meta-rules for better meta-learning-based algorithm ranking, Machine Learning, vol.93, issue.1, pp.141-161, 2013.
DOI: 10.1007/s10994-013-5387-y
URL: http://researchcommons.waikato.ac.nz/bitstream/10289/7823/1/qs-mlj13.pdf

Q. Sun, B. Pfahringer, and M. Mayo, Full model selection in the space of data mining operators, Proceedings of the Fourteenth International Conference on Genetic and Evolutionary Computation Conference Companion, GECCO Companion '12, pp.1503-1504, 2012.
DOI: 10.1145/2330784.2331014

L. Todorovski, P. Brazdil, and C. Soares, Report on the experiments with feature selection in meta-level learning, Proceedings of the PKDD-00 workshop on data mining, decision support, meta-learning and ILP: forum for practical problem presentation and prospective solutions, pp.27-39, 2000.

T. Uno, T. Asai, Y. Uchida, and H. Arimura, An Efficient Algorithm for Enumerating Closed Patterns in Transaction Databases, Discovery Science, pp.16-31, 2004.
DOI: 10.1007/978-3-540-30214-8_2

J. Vanschoren, H. Blockeel, B. Pfahringer, and G. Holmes, Experiment databases, Machine Learning, pp.127-158, 2012.
DOI: 10.1007/978-1-4419-7738-0_14

R. Vilalta and Y. Drissi, A perspective view and survey of meta-learning, Artificial Intelligence Review, vol.18, issue.2, pp.77-95, 2002.
DOI: 10.1023/A:1019956318069

L. Wang, M. Sugiyama, C. Yang, K. Hatano, and J. Feng, Theory and Algorithm for Learning with Dissimilarity Functions, Neural Computation, vol.21, issue.5, pp.1459-1484, 2009.

M. Wistuba, N. Schilling, and L. Schmidt-Thieme, Learning data set similarities for hyperparameter optimization initializations, MetaSel@PKDD/ECML, pp.15-26, 2015.
DOI: 10.1109/dsaa.2015.7344817

D. H. Wolpert, The lack of a priori distinctions between learning algorithms, Neural Computation, vol.8, issue.7, pp.1341-1390, 1996.

D. H. Wolpert and W. G. Macready, No free lunch theorems for optimization, IEEE Transactions on Evolutionary Computation, vol.1, issue.1, pp.67-82, 1997.

A. B. Yoo, M. A. Jette, and M. Grondona, SLURM: Simple Linux Utility for Resource Management, Job Scheduling Strategies for Parallel Processing, pp.44-60, 2003.

M. Zakova, P. Kremen, F. Zelezny, and N. Lavrac, Automating Knowledge Discovery Workflow Composition Through Ontology-Based Planning, IEEE Transactions on Automation Science and Engineering, vol.8, issue.2, pp.253-264, 2011.
DOI: 10.1109/TASE.2010.2070838