A difficult problem for artificial intelligence: how to assess originality of scientific research and the dangers of apostrophes in family names
Abstract
A group of humans (like any group of primates [6]) exhibits behavior governed by complex factors. As Asimov already remarked [11], the behavior of a group of humans may change simply because some of its members learn of the existence of a theory developed to describe it. Hence, the introduction of (algorithmic) procedures aimed at measuring the quality of scientists' research (e.g. bibliometric indices) could ultimately be useful only if they were kept secret. The present paper intends to draw the attention of scientists working in artificial intelligence to an emerging issue in science whose impact could be enormous: what is needed is a robot that, while being intrinsically honest (perhaps constructed to respect Asimov's three laws of robotics), can measure the scientific value of published papers. However, for such measures to be used fairly, the robots in charge should not divulge most of the details of the algorithm employed. Until fair robotic assessment of the quality of scientific research becomes feasible (i.e., until Asimov's and Turing's dream of having intelligent and fundamentally honest robots available becomes reality), the method of co-optation and evaluation by peers, established during the Middle Ages, remains the only effective one for selecting the most appropriate candidates for academic positions. This co-optation process must be performed by following certain well-known procedures which seem to have been forgotten.
Domains
Engineering Sciences [physics]
Origin: Files produced by the author(s)