HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research papers, whether published or not, and for PhD dissertations. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

New submissions

The functionalisation of a Si(100) silicon wafer allows for the oriented grafting of a monolayer of Mn12 nanomagnets using a two-step procedure.

In situ generation of indium catalyst droplets and subsequent growth of crystalline silicon nanowires on ITO by plasma-enhanced CVD are reported, and the wurtzite (Si-IV) phase is clearly evidenced in some wires.

The chemistry of aryldiazonium salts has been widely used in recent years to graft, in a very simple and robust way, ultrathin polyphenylene-like films onto a broad range of surfaces. We show here that the same chemistry can be used to obtain self-adhesive surfaces. This target was reached in a simple way by coating various surfaces with chemisorbed organic films containing active aryldiazonium salts. These self-adhesive surfaces are then put into contact with various species (molecules, polymers, nanoparticles, nanotubes, graphene flakes, etc.) that react either spontaneously or under activation with the immobilized aryldiazonium salts. Our self-adhesive surfaces were synthesized following a simple aqueous two-step protocol based on p-phenylenediamine diazotisation. The first diazotisation step results in the robust grafting of thin polyaminophenylene (PAP) layers onto the surface. The second diazotisation step converts the grafted PAP film into a poly-aryldiazonium polymer (PDP) film. The covalent grafting between these self-adhesive surfaces and the target species was achieved by direct contact or by immersion of the self-adhesive surfaces in solution. In this preliminary work we present the grafting of multi-wall carbon nanotubes (MWCNTs), flakes of highly oriented pyrolytic graphite (HOPG), various organic compounds and copper nanoparticles. We also tested these immobilized aryldiazonium salts as electropolymerization initiators for the grafting-to process.

It is intuitively felt that visual cues should enhance online communication, and this experimental study aims to test this prediction by exploring the value provided by a webcam in an online L2 pedagogical teacher-to-learner interaction. A total of 40 French undergraduate students with a B2 level in English were asked to describe in English four previously unseen photographs to a native English-speaking teacher of EFL via Skype, a free web-based videoconferencing tool, during a 10-minute interaction. Twenty students were assigned to the videoconferencing condition and 20 to the audioconferencing condition. All 40 interactions were recorded using dynamic screen capture software and were analyzed with ELAN, a sound and video annotation tool. Participants' perceptions of the online interaction are first compared with regard to social presence and their understanding and appreciation of the online interaction, using data gathered from a post-task questionnaire. The study then explores whether seeing the interlocutor's image affects the patterns of these synchronous exchanges and the word search episodes. Results indicated that the impact of the webcam on the online pedagogical interaction was not as critical as had been predicted.

This paper investigates the evolution of the computational linguistics domain through a quantitative analysis of the ACL Anthology (containing around 12,000 papers published between 1985 and 2008). Our approach combines complex-systems methods with natural language processing techniques. We reconstruct the socio-semantic landscape of the domain by inferring a co-authorship network and a semantic network from the analysis of the corpus. First, keywords are extracted using a hybrid approach mixing linguistic patterns with statistical information. Then, the semantic network is built using a co-occurrence analysis of these keywords within the corpus. Combining temporal and network analysis techniques, we are able to examine the main evolutions of the field and the most active subfields over time. Lastly, we propose a model to explore the mutual influence of the social and the semantic network over time, leading to a socio-semantic co-evolutionary system.
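The co-occurrence step can be sketched in a few lines. This is a minimal illustration only: the keyword lists and the `min_weight` threshold below are hypothetical, not taken from the paper.

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_network(docs_keywords, min_weight=1):
    """Build an undirected co-occurrence graph: nodes are keywords,
    edge weights count how many documents mention both keywords."""
    weights = defaultdict(int)
    for kws in docs_keywords:
        # Each unordered keyword pair in a document adds one to its edge.
        for a, b in combinations(sorted(set(kws)), 2):
            weights[(a, b)] += 1
    return {edge: w for edge, w in weights.items() if w >= min_weight}

docs = [
    ["parsing", "grammar", "treebank"],
    ["parsing", "treebank"],
    ["machine translation", "alignment"],
]
net = cooccurrence_network(docs)
# net[("parsing", "treebank")] == 2: the pair co-occurs in two documents.
```

A temporal analysis would then rebuild this network on a sliding window of publication years and track how edge weights evolve.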

One characteristic feature of Celtic languages is mutation, i.e. the fact that the initial consonant of words may change according to the context. We provide a quick description of this linguistic phenomenon for Breton along with a formalization using finite state transducers. This approach allows an exact and compact description of mutations. The result can be used in various contexts, especially for spell checking and language teaching.
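The mutation rules lend themselves naturally to such a formalization. The sketch below uses a plain replacement table rather than a full finite-state transducer, and covers only the soft mutation (lenition) with the standard textbook consonant pairs; it is not necessarily the exact rule set used by the authors.

```python
# Breton soft mutation (lenition) mapping; "gw" must be tested before
# "g" because the longer prefix takes precedence.
SOFT = [("gw", "w"), ("k", "g"), ("t", "d"), ("p", "b"),
        ("g", "c'h"), ("d", "z"), ("b", "v"), ("m", "v")]

def soft_mutate(word):
    """Apply the soft mutation to a word's initial consonant, if any."""
    for initial, mutated in SOFT:
        if word.startswith(initial):
            return mutated + word[len(initial):]
    return word  # vowels and unlisted consonants are unchanged

# Classic example: "tad" (father) becomes "dad" after a trigger
# word such as "da" (your): "da dad".
```

A real transducer would additionally condition the rule on the triggering context word, which is exactly what the finite-state formalization of the paper captures.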

In this paper, we propose a hybrid decision algorithm for access selection in a multi-operator network environment, where competing operators share their radio access networks to meet traffic and data rate demands. The proposed algorithm guarantees user satisfaction and a global gain for all cooperating operators. Simulation results prove the efficiency of the proposed scheme and show that cooperation between operators benefits both users and operators: user acceptance, operator resource utilization and operator revenue all increase.

In this paper, we perform a business analysis of our hybrid decision algorithm for access selection in a multi-operator network environment. We investigate the ability of the operator to express its strategy and influence the access selection for its client. To this end, we study two important coefficients of the previously proposed cost function, Wu and Wop, and show that the values of these coefficients are not arbitrary. Simulation results show that the ratio Wu/Wop enables a selection decision that respects the operator's strategy, and that it affects the global profit achieved by all cooperating operators.
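The role of the two coefficients can be illustrated with a hypothetical weighted cost function; the actual cost function of the paper is not reproduced here, only the way the ratio Wu/Wop (written `w_u`/`w_op` below) tilts the decision between user satisfaction and operator gain.

```python
def access_cost(user_utility, operator_gain, w_u, w_op):
    # Hypothetical linear cost: lower is better. Only the role of the
    # ratio w_u / w_op is illustrated, not the paper's exact function.
    return -(w_u * user_utility + w_op * operator_gain)

def select_access(candidates, w_u, w_op):
    # candidates: list of (network_id, user_utility, operator_gain)
    return min(candidates, key=lambda c: access_cost(c[1], c[2], w_u, w_op))[0]

nets = [("op_A", 0.9, 0.3), ("op_B", 0.4, 0.8)]
select_access(nets, w_u=2.0, w_op=1.0)   # user-oriented ratio favors "op_A"
select_access(nets, w_u=1.0, w_op=2.0)   # operator-oriented ratio favors "op_B"
```

With a high Wu/Wop ratio the network offering the better user utility wins; lowering the ratio lets the operator steer its client toward the network that maximizes its own gain.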

In a previous paper, we defined both a unified formal framework based on L.-S. Barbosa's components for modeling complex software systems, and a generic formalization of integration rules to combine their behavior. In the present paper, we continue this work by proposing a variant of first-order fixed-point modal logic to express both component and system requirements. We establish the important property that this logic is adequate with respect to bisimulation. We then study the conditions to be imposed on our logic (characterization of sub-families of formulas) to preserve properties along integration operators, and finally show correctness-by-construction results. The complexity of computing systems calls for formal means of managing their size. To deal with this issue, we propose an abstraction (resp. simulation) of components by components. This enables us to build systems and check their correctness in an incremental way.
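Adequacy with respect to bisimulation can be made concrete on finite labelled transition systems. The sketch below is a naive partition-refinement computation of the coarsest strong bisimulation, offered only as an illustration of the notion; it is not the coinductive component setting of the paper.

```python
def bisimulation_classes(states, labels, delta):
    """Coarsest strong bisimulation of a finite LTS by naive partition
    refinement. delta[(s, a)] is the set of a-successors of s (absent
    keys mean no a-transition). Returns the list of equivalence classes."""
    blocks = [set(states)]
    changed = True
    while changed:
        changed = False
        def signature(s):
            # Which blocks can s reach, and under which labels?
            return frozenset(
                (a, i)
                for a in labels
                for i, b in enumerate(blocks)
                if delta.get((s, a), set()) & b
            )
        new_blocks = []
        for b in blocks:
            groups = {}
            for s in b:
                groups.setdefault(signature(s), set()).add(s)
            new_blocks.extend(groups.values())
        if len(new_blocks) != len(blocks):
            changed = True  # a block was split; refine again
        blocks = new_blocks
    return blocks

# "s" and "t" both make an x-step into the same class; "u" is inert.
states = ["s", "t", "u"]
delta = {("s", "x"): {"u"}, ("t", "x"): {"u"}}
classes = bisimulation_classes(states, ["x"], delta)
```

Adequacy then means: two components satisfy exactly the same formulas of the logic if and only if their states fall in the same class.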

The paper deals with the accuracy of guaranteed error bounds on outputs of interest computed from approximate methods such as the finite element method. A considerable improvement is introduced for linear problems thanks to new bounding techniques based on Saint-Venant's principle. The main breakthrough of these optimized bounding techniques is the use of properties of homothetic domains, which makes it possible to derive guaranteed and accurate bounds on the contributions to the global error estimate over a local region of the domain. The performance of these techniques is illustrated through several numerical experiments.

In the context of global/goal-oriented error estimation applied to computational mechanics, the need to obtain reliable and guaranteed bounds on the discretization error has motivated the use of residual error estimators. These estimators require the construction of admissible stress fields verifying the equilibrium exactly. This article focuses on a recent method, based on a flux-equilibration procedure and called the element equilibration + star-patch technique (EESPT), that provides such stress fields. The standard version relies on a strong prolongation condition in order to calculate equilibrated tractions along finite element boundaries. Here, we propose an enhanced version, based on a weak prolongation condition resulting in a local minimization of the complementary energy, which leads to optimal tractions in selected regions. Geometric and error estimate criteria are introduced to select the relevant zones for optimizing the tractions. We demonstrate that this optimization procedure is important and relevant for producing sharper estimators at an affordable computational cost, especially when the error estimate criterion is used. Two- and three-dimensional numerical experiments demonstrate the efficiency of the improved technique.

Robust global/goal-oriented error estimation is used nowadays to control the approximate finite element solutions obtained from simulation. In the context of computational mechanics, the construction of admissible stress fields (i.e. stress tensors which verify the equilibrium equations) is required to set up strict and guaranteed error bounds (using residual-based error estimators) and plays an important role in the quality of the error estimates. This work focuses on the different procedures used in the calculation of admissible stress fields, which is a crucial and technically complicated point. The three main techniques that currently exist, called the element equilibration technique (EET), the star-patch equilibration technique (SPET), and the element equilibration + star-patch technique (EESPT), are investigated and compared with respect to three different criteria, namely the quality of the associated error estimators, the computational cost and the ease of practical implementation in commercial finite element codes. The numerical results presented focus on industrial problems; they highlight the main advantages and drawbacks of the different methods and show that the behavior of the three estimators, which have the same convergence rate as the exact global error, is consistent. Two- and three-dimensional experiments have been carried out in order to compare the performance and the computational cost of the three approaches. The analysis of the results reveals that the SPET is more accurate than the EET and EESPT methods, but its computational cost is higher. Overall, the numerical tests prove the interest of the hybrid EESPT method and show that it is a good compromise between quality of the error estimate, practical implementation and computational cost. Furthermore, the influence of the cost function involved in the EET and the EESPT is studied in order to optimize the estimators.

Knowledge of the soil water retention curve (SWRC) is essential for understanding and modeling hydraulic processes in the soil. However, direct determination of the SWRC is time consuming and costly. In addition, it requires a large number of samples, due to the high spatial and temporal variability of soil hydraulic properties. An alternative is the use of models, called pedotransfer functions (PTFs), which estimate the SWRC from easy-to-measure properties. The aim of this paper was to test the accuracy of 16 point or parametric PTFs reported in the literature on different soils from the south and southeast of the State of Pará, Brazil. The PTFs tested were proposed by Pidgeon (1972), Lal (1979), Aina & Periaswamy (1985), Arruda et al. (1987), Dijkerman (1988), Vereecken et al. (1989), Batjes (1996), van den Berg et al. (1997), Tomasella et al. (2000), Hodnett & Tomasella (2002), Oliveira et al. (2002), and Barros (2010). We used a database that includes soil texture (sand, silt, and clay), bulk density, soil organic carbon, soil pH, cation exchange capacity, and the SWRC. Most of the PTFs tested did not show good performance in estimating the SWRC. The parametric PTFs, however, performed better than the point PTFs in assessing the SWRC in the tested region. Among the parametric PTFs, those proposed by Tomasella et al. (2000) achieved the best accuracy in estimating the empirical parameters of the van Genuchten (1980) model, especially when tested in the top soil layer.
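For reference, the van Genuchten (1980) retention model, whose empirical parameters the parametric PTFs estimate, can be written in a few lines (the parameter values below are illustrative only, not fitted to the Pará soils):

```python
def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at matric suction h (units of 1/alpha),
    van Genuchten (1980) form with the usual constraint m = 1 - 1/n:
    theta(h) = theta_r + (theta_s - theta_r) / (1 + (alpha*|h|)^n)^m."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * abs(h)) ** n) ** m

# At zero suction the soil is saturated, so theta equals theta_s:
van_genuchten(0.0, theta_r=0.05, theta_s=0.45, alpha=0.075, n=1.89)  # -> 0.45
```

A parametric PTF predicts `theta_r`, `theta_s`, `alpha` and `n` from texture, bulk density and other easy-to-measure properties, while a point PTF predicts water content directly at a few fixed suctions.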

The aim of the present work is to take stock of coastal engineering structures and of their role in present-day coastal sediment dynamics. In this respect, the eastern part of the Rhône delta is an excellent study site: on both sides of the river mouth, most of the coastline is now engineered. The structures are highly heterogeneous in age and in construction method. They reflect different problems and stakes, but they are also the product of changing attitudes and techniques over recent decades (Miossec, 1995). We first retrace the history of these structures. We then analyze their effectiveness, with emphasis on the morphosedimentary context in which they are set and on its long-term evolution.

The 'adaptation-mitigation synergies' project seeks ways to exploit the synergies between REDD+ and adaptation to climate change, to ensure that REDD+ has an impact beyond mitigation and is sustainable in a changing climate.

The publication of What's the Matter with Kansas? (Frank, 2004a) by the American journalist Thomas Frank, and then of its French translation under the title Pourquoi les pauvres votent à droite (Frank, 2013), were events in American and French intellectual life. The author describes a form of radical electoral realignment whereby, in the United States, the poorest citizens had supposedly begun to vote massively in favor...

To set the scene for the scientific background of this issue, it seems important to recall that the field of French electoral studies is marked by a routinized use, largely tied to disciplinary boundaries, of certain types of data: a historical divide separates, on the one hand, so-called aggregate or "ecological" data derived from actual vote counts and, on the other hand, so-called "individual" data, most often derived from pre-/post-el...

Electoral sociology and electoral geography differ notably in the methods they employ, reflecting a more or less implicit division of labor between the "social" and the "spatial". In this article, we argue that these oppositions are unwarranted, and we propose multilevel modeling as a possible meeting point between geographers and sociologists of elections, one that gives substance to the notion of context by allowing the determinants of the vote to operate differently across space.

Polycyclic aromatic hydrocarbons (PAHs) are contaminants of our environment and our food. Because of their toxicity, the European Commission has regulated their content in foodstuffs, notably in oils. Fat-and-oil manufacturers are therefore required to verify the compliance of their products. In this context, the Lesieur group wishes to develop a new fast and portable analytical tool. This broad research project thus aims to design an electrophoretic microsystem capable of analyzing PAHs in edible oils. As the first study within this project, this thesis work consisted in developing new analytical protocols. In a first part, PAH separation methods were developed in cyclodextrin-modified capillary electrophoresis (CE) coupled to laser-induced fluorescence detection. Following multivariate strategies based on experimental design, two separation methods were optimized. The eight PAHs common to the lists established by the United States Environmental Protection Agency and the European Food Safety Authority were separated in under 7 min, and nineteen PAHs, also classified by these two bodies, were analyzed in under 18 min. These separation methods were successfully applied to spiked oil extracts. A second part addressed the transfer of the eight-PAH method to the microsystem format. The main difficulty encountered was the lack of sensitivity of the detection system coupled to the chips. The first objective was therefore to optimize the injected sample quantities and the detection parameters with a model compound in a borate buffer. However, only four of the nineteen PAHs previously studied in CE could be detected.
Nevertheless, under the conditions optimized by experimental design, they were separated in under 4 min. Finally, various molecularly imprinted polymers (MIPs) were synthesized in order to selectively extract PAHs from oils. After screening the synthesis conditions, the selectivity of each MIP was evaluated in a pure medium by comparing its retention capacity with that of a non-imprinted polymer. The eight PAHs common to the two lists could finally be extracted selectively from sunflower oils, but with extraction yields that remain insufficient and call for an improved extraction procedure.

The literature is not unanimous on whether climate change will be favorable or unfavorable to developing countries. While several studies conclude that agriculture in developing countries is vulnerable to climate change because low-capital farming predominates, others suggest the opposite. This thesis aims to study the impact of climate change on Tunisian agriculture using a Ricardian analysis of agricultural value added. We carry out a spatio-temporal analysis of the response of agricultural value added to climate change on a dataset covering 21 governorates of Tunisia over the period 1992-2007. Based on the estimation results of this analysis, we simulate the impact of climate change on agriculture under the projections of a moderate scenario for 2020. The results suggest that climate change combined with technological progress would benefit Tunisian agriculture. This finding challenges the conclusions of most existing studies, which stress the negative effects of climate change on agriculture.

We consider a recently introduced dynamic programming scheme to compute parsimonious evolutionary scenarios for gene adjacencies. We extend this scheme to sample evolutionary scenarios from the whole solution space under the Boltzmann distribution. We apply our algorithms to a dataset of mammalian gene trees and adjacencies, and observe a significant reduction of the number of syntenic inconsistencies observed in the resulting ancestral gene adjacencies.
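The sampling idea can be sketched on a simplified layered DP: a backward pass computes partition functions, and a forward pass draws each choice with probability proportional to its Boltzmann weight. The paper's DP runs over gene trees and adjacencies; the flat sequence of independent steps below is only an illustration of the two-phase scheme.

```python
import math
import random

def boltzmann_sampler(layers, kT=1.0, seed=0):
    """layers[i] is a list of (label, cost) choices at step i; a scenario
    picks one choice per step and its cost is the sum of the choices.
    Samples a scenario with P(scenario) proportional to exp(-cost/kT)."""
    rng = random.Random(seed)
    # Backward pass: Z[i] is the partition function of the suffix from step i.
    Z = [1.0] * (len(layers) + 1)
    for i in range(len(layers) - 1, -1, -1):
        Z[i] = sum(math.exp(-c / kT) * Z[i + 1] for _, c in layers[i])
    # Forward pass: at each step, draw a choice with probability
    # exp(-c/kT) * Z[i+1] / Z[i].
    out = []
    for i, options in enumerate(layers):
        r = rng.random() * Z[i]
        for label, c in options:
            w = math.exp(-c / kT) * Z[i + 1]
            if r <= w:
                out.append(label)
                break
            r -= w
    return out
```

Lowering `kT` concentrates the distribution on parsimonious scenarios, while raising it explores the whole solution space more uniformly.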

On April 30th, 2008, the journal Nature announced that the missing circuit element, postulated thirty-seven years earlier by Professor Leon O. Chua, had been found. Thus, after the capacitor, the resistor and the inductor, the existence of a fourth fundamental element of electronic circuits, called the "memristor", was established. In order to point out the importance of such a discovery, the aim of this article is first to propose an overview of the manner in which the other three were invented over the past centuries. Then, a comparison between the main properties of the singing arc, i.e. a forerunner of the triode used in wireless telegraphy, and those of the memristor will make it possible to argue that the singing arc could be considered the oldest memristor.

We consider a classical system of n charged particles in an external confining potential, in any dimension larger than 2. The particles interact via pairwise repulsive Coulomb forces and the coupling parameter is of order 1/n (mean-field scaling). By a suitable splitting of the Hamiltonian, we extract the next to leading order term in the ground state energy, beyond the mean-field limit. We show that this next order term, which characterizes the fluctuations of the system, is governed by a new ''renormalized energy'' functional providing a way to compute the total Coulomb energy of a jellium (i.e. an infinite set of point charges screened by a uniform neutralizing background), in any dimension. The renormalization that cuts out the infinite part of the energy is achieved by smearing out the point charges at a small scale, as in Onsager's lemma. We obtain consequences for the statistical mechanics of the Coulomb gas: next to leading order asymptotic expansion of the free energy or partition function, characterizations of the Gibbs measures, estimates on the local charge fluctuations and factorization estimates for reduced densities. This extends results of Sandier and Serfaty to dimension higher than two by an alternative approach.

A new approach called the Flow Curvature Method has recently been developed in a book entitled Differential Geometry Applied to Dynamical Systems. It consists in considering the trajectory curve, integral of any n-dimensional dynamical system, as a curve in Euclidean n-space, which enables one to analytically compute the curvature of the trajectory, i.e. of the flow. It has thus been established, on the one hand, that the locus of points where the curvature of the flow vanishes defines a manifold called the flow curvature manifold and, on the other hand, that this manifold, associated with any n-dimensional dynamical system, directly provides the analytical equation of its slow manifold, whose invariance has been proved according to Darboux theory. The Flow Curvature Method has already been applied to many types of autonomous dynamical systems, either singularly perturbed (such as the Van der Pol, FitzHugh-Nagumo and Chua models) or non-singularly perturbed (such as the Pikovskii-Rabinovich-Trakhtengerts, Rikitake and Lorenz models). Moreover, it has also been applied to non-autonomous dynamical systems such as the forced Van der Pol model. In this article it is used for the first time to analytically compute the slow invariant manifold equation of the four-dimensional unforced and forced Heartbeat Model. This slow invariant manifold equation, which can be considered a "state equation" linking all variables, could then be used in heart prediction and control, given the strong correspondence between the model and the physiological behavior of the cardiovascular system.
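For a planar system x' = f(x), the flow curvature manifold is the zero set of det(x', x'') with x'' = J(x) x' by the chain rule. The sketch below checks this numerically on the singularly perturbed Van der Pol system (a textbook example, not the Heartbeat Model of the article): the curvature changes sign across the slow manifold.

```python
import numpy as np

def flow_curvature_2d(f, jac, x):
    """det(x', x'') for a planar system; its zero set is the flow
    curvature manifold, which carries the slow invariant manifold."""
    v = f(x)            # velocity x'
    a = jac(x) @ v      # acceleration x'' = J(x) x'
    return v[0] * a[1] - v[1] * a[0]

# Singularly perturbed Van der Pol: eps*x' = y + x - x^3/3, y' = -x.
eps = 0.01
f = lambda p: np.array([(p[1] + p[0] - p[0] ** 3 / 3.0) / eps, -p[0]])
jac = lambda p: np.array([[(1.0 - p[0] ** 2) / eps, 1.0 / eps],
                          [-1.0, 0.0]])

# Near x = 2 the slow manifold lies close to the cubic nullcline
# y = x^3/3 - x; the curvature changes sign across it.
c_above = flow_curvature_2d(f, jac, np.array([2.0, 0.665]))
c_below = flow_curvature_2d(f, jac, np.array([2.0, 0.6547]))
```

The book's method goes further and obtains the manifold equation symbolically; the numeric sign change above is only a sanity check of the defining condition.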

This study deals with the evolution of so-called "intelligent" networks (insect societies without a leader, cells of an organism, the brain, ...) during their apprenticeship period. First we briefly summarize Version 2 (published in French), whose main characteristics are: 1) A network connected to its environment is considered as immersed in an information field created by this environment, which thereby dictates the apprenticeship constraints to it. 2) The formalism used draws inspiration from that of quantum field theory (principle of stationary action, gauge fields, invariance under symmetry transformations, ...). 3) We obtain Lagrange equations whose solutions describe the evolution of the network throughout the apprenticeship period. 4) Then, proceeding with the same formal inspiration, we suggest other lines of study capable of advancing knowledge in the field considered. In a second part, after recalling the points to be improved, we present Version 3, which brings, we think, relevant improvements. Indeed: 5) We consider weighted averages of the variables, which introduces probabilities. 6) We define two observables (L, the average information flux, and A, the activity of the network) which could be measured and thus compared with experimental results. 7) We find that L, the weighted average of information flows, is an invariant. 8) Finally, we propose two expressions for the conactance, from which we deduce the corresponding Lagrange equations that must be solved to know the evolution of the weighted averages considered. At the present stage, however, we think we can progress only by carrying out experiments (see projects such as the Human Brain Project) and by discovering invariants and symmetries that would allow us, as in physics, to classify networks and, above all, to better understand the connections between them.
Indeed, and this is one of the research directions we propose, the underlying problem is to understand how, after their apprenticeship period, several networks can connect together to produce, in the case of the brain for instance, what we call mental states.

The collective behaviour of soliton ensembles (i.e. the solitonic gas) is studied using direct numerical simulation. Traditionally this problem was addressed in the context of integrable models such as the celebrated KdV equation. We extend this analysis to non-integrable KdV-BBM type models. Some high-resolution numerical results are presented in both the integrable and non-integrable cases. Moreover, the free surface elevation probability distribution is shown to be quasi-stationary. Finally, we employ asymptotic methods along with Monte Carlo simulations in order to study quantitatively the dependence of some important statistical characteristics (such as the kurtosis and skewness) on the Stokes-Ursell number (which measures the relative importance of nonlinear effects compared to dispersion) and on the magnitude of the BBM term.

Over the last decades, the duality between chaotic numbers and pseudo-random numbers has been highlighted. The emergence of pseudo-randomness from chaos via various under-sampling methods has recently been discovered. Instead of opposing these two qualities of numbers (chaos and pseudo-randomness), it is more interesting to build mixed chaotic/pseudo-random number generators, which can modulate the desired properties between chaos and pseudo-randomness. Because there is nowadays an increasing demand for new and more efficient number generators of this type, it is important to develop new tools to shape, more or less automatically, various families of such generators. Mathematical chaotic circuits have recently been introduced for this purpose, among several others. There is some analogy between them and electric circuits, but the components differ: mathematical circuits use new ones, which we describe herein. The combination of such mathematical components leads to several new applications which improve the performance of well-known chaotic attractors (Hénon, Chua, Lorenz, Rössler, ...). They can also be used on a larger scale to shape numerous architectures of mixed chaotic/pseudo-random number generators.
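A minimal illustration of pseudo-randomness emerging from chaos by under-sampling: iterate a chaotic map and keep only one bit every few iterations, which weakens the short-range correlations of the raw orbit. The map, parameters and stride below are illustrative, not those of the chapter.

```python
def logistic_bits(n_bits, x0=0.123456, r=3.99, stride=16):
    """Crude chaotic bit generator: iterate the logistic map
    x -> r*x*(1-x) and keep one threshold bit every `stride`
    iterations (under-sampling). Illustrative sketch only; a
    practical generator needs statistical test batteries."""
    x, bits = x0, []
    while len(bits) < n_bits:
        for _ in range(stride):
            x = r * x * (1.0 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits
```

Increasing `stride` trades throughput for decorrelation; a mixed chaotic/pseudo-random generator would additionally combine several such maps, which is what the circuit-like composition of the text is for.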


We study a mean-field version of rank-based models of equity markets such as the Atlas model introduced by Fernholz in the framework of Stochastic Portfolio Theory. We obtain an asymptotic description of the market when the number of companies grows to infinity. Then, we discuss the long-term capital distribution. We recover the Pareto-like shape of capital distribution curves usually derived from empirical studies, and provide a new description of the phase transition phenomenon observed by Chatterjee and Pal. Finally, we address the performance of simple portfolio rules and highlight the influence of the volatility structure on the growth of portfolios.
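The rank-based dynamics described above can be sketched with a toy Euler–Maruyama simulation of an Atlas-type market, in which only the lowest-ranked stock receives a positive drift. All parameter values (`gamma`, `sigma`, `dt`) are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

# Minimal Euler-Maruyama sketch of an Atlas-type rank-based market:
# only the lowest-ranked log-capitalization receives a positive drift.
# Parameter values are made up for illustration.
rng = np.random.default_rng(0)
n, steps, dt = 10, 5000, 1e-3
gamma, sigma = 1.0, 0.5
x = np.zeros(n)                          # log-capitalizations
for _ in range(steps):
    drift = np.zeros(n)
    drift[np.argmin(x)] = n * gamma      # the "Atlas" stock pushes the market up
    x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
```

Because the entire drift is carried by the bottom-ranked company, the market as a whole still grows at rate `gamma`, which is the stylized feature of the Atlas model.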

In the deterministic context, a series of well-established results allows delay differential equations (DDEs) to be reformulated as evolution equations in infinite-dimensional spaces. Several models in the theoretical economics literature have been studied using this reformulation. In the stochastic case, on the other hand, only a few results of this kind are available, and only for specific problems. The contribution of the present letter is to present a way to reformulate in infinite dimension a prototype controlled stochastic DDE in which the control variable appears delayed in the diffusion term. As applications, we present a model for quadratic-risk-minimization hedging of European options with execution delay and a time-to-build model with shock. Some comments concerning the possible use of dynamic programming after the reformulation in infinite dimension conclude the letter.

This study is an attempt to interpret the effect of fluctuations in foreign-currency receipts on a developing economy in which the State is the main actor. Our hypothesis is that these variations determine public action, owing to the influence of the State and to the positive impact of the receipts on its intervention. To shed light on this theoretical problem, we chose Egypt, because external inflows (oil and Suez Canal receipts, remittances from emigrant workers, and foreign public aid) are substantial: they grew rapidly from 1974 to 1980, then declined until 1996. Finally, public action there is substantial. Our thesis comprises three parts. We begin with a study of two theoretical analyses: the Dutch disease model and the theory of rentier economies. According to the first, set within the neoclassical paradigm, the boom in external receipts alters the allocation of local resources between sectors. Two tradable-goods sectors are distinguished (the first booming, the second lagging) alongside the non-tradable (sheltered) goods sector. Finally, relative prices vary over the short and medium term. The second theory concerns economies dependent on large rents, that is, revenues that mobilize few factors of production. The State, if it is their primary recipient, is said to be rentier. Its growing budgetary means serve first to consolidate its political power and then to support industrialization. In the second part, we present the rents of the Egyptian economy. Finally, the Egyptian case makes it possible to characterize rentier economies. Foreign-currency receipts were particularly high from 1974 to 1985; public action was strengthened until 1985, but diversification towards industry and agriculture remained weak. Since 1986, the lower level of external inflows has led to a new form of State intervention, leaving more room for the private sector.
On the other hand, the Dutch disease is barely visible over the period, because State mechanisms prevail over market mechanisms. Rentier economies are vulnerable because the boom in foreign-currency receipts creates a financial dependence harmful to growth, and because the leaders' concerns are less economic than political.

Imaging polarimetry is an important tool for the study of cosmic magnetic fields. In our Galaxy, polarization levels from a few percent up to ∼10% are measured in the submillimeter dust emission from molecular clouds and in the synchrotron emission from supernova remnants. Only a few techniques exist to image the distribution of polarization angles, tracing the plane-of-sky projection of the magnetic field orientation. At submillimeter wavelengths, polarization is measured either as the differential total power of polarization-sensitive bolometer elements or by modulating the polarization of the signal. Bolometer arrays such as LABOCA at the APEX telescope are used to observe the continuum emission from fields as large as ∼0.2 deg. Here we present the results from the commissioning of PolKa, a reflection-type polarimeter for LABOCA. The waveplate has a good efficiency of at least 90%. The modulation efficiency depends mainly on the sampling and on the angular velocity of the waveplate. For the data analysis, the concept of generalized synchronous demodulation is introduced. The instrumental polarization towards a point source is at the level of ∼0.1%, increasing to a few percent at the −10 dB contour of the main beam. A method to correct for its effect in observations of extended sources is presented. Our map of the polarized synchrotron emission from the Crab nebula is in agreement with structures observed at radio and optical wavelengths. The linear polarization measured in OMC1 agrees with results from previous studies, while the high sensitivity of LABOCA enables us to also map the polarized emission of the Orion Bar, a prototypical photon-dominated region.
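The synchronous-demodulation idea can be illustrated with a toy numerical sketch (not PolKa's actual pipeline): for an ideal rotating half-wave plate, the linearly polarized part of the signal is modulated at four times the plate's rotation frequency, and projecting onto the 4ω reference recovers the Stokes parameters. The frequencies, Stokes values, and noise level below are made-up assumptions.

```python
import numpy as np

# Toy synchronous demodulation for a rotating half-wave plate:
# polarized power is modulated at 4x the plate frequency omega.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 4000, endpoint=False)
omega = 2 * np.pi * 5.0                  # plate angular frequency (assumed)
I, Q, U = 1.0, 0.3, 0.1                  # toy Stokes parameters
signal = 0.5 * (I + Q * np.cos(4 * omega * t) + U * np.sin(4 * omega * t))
signal += 0.01 * rng.standard_normal(t.size)

# Demodulate: project onto the 4-omega references and average over full cycles
Q_est = 4 * np.mean(signal * np.cos(4 * omega * t))
U_est = 4 * np.mean(signal * np.sin(4 * omega * t))
```

Averaging over an integer number of modulation cycles makes the unpolarized term `I` drop out of the projections, which is the essential point of synchronous demodulation.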

We continue the study of the impact of baryon physics on the small-scale problems of the $\Lambda$CDM model, based on a semi-analytical model (Del Popolo, 2009). With this model, we show how the cusp/core, missing satellite (MSP), and Too Big to Fail (TBTF) problems and the angular momentum catastrophe can be reconciled with observations once parent-satellite interaction is added. The interaction between dark matter (DM) and baryons through dynamical friction (DF) can sufficiently flatten the inner cusp of the density profiles to solve the cusp/core problem. Combining, in our model, a Zolotov et al. (2012)-like correction, similarly to Brooks et al. (2013), with the effects of UV heating and tidal stripping, the number of massive, luminous satellites, as seen in the Via Lactea 2 (VL2) subhaloes, is in agreement with the numbers observed in the MW, thus resolving the MSP and TBTF problems. The model also produces a distribution of the angular spin parameter and angular momentum in agreement with observations of the dwarfs studied by van den Bosch, Burkert, & Swaters (2001).

Knowledge of the soil water retention curve (SWRC) is essential for understanding and modeling hydraulic processes in the soil. However, direct determination of the SWRC is time consuming and costly. In addition, it requires a large number of samples, due to the high spatial and temporal variability of soil hydraulic properties. An alternative is the use of models, called pedotransfer functions (PTFs), which estimate the SWRC from easy-to-measure properties. The aim of this paper was to test the accuracy of 16 point or parametric PTFs reported in the literature on different soils from the south and southeast of the State of Pará, Brazil. The PTFs tested were proposed by Pidgeon (1972), Lal (1979), Aina & Periaswamy (1985), Arruda et al. (1987), Dijkerman (1988), Vereecken et al. (1989), Batjes (1996), van den Berg et al. (1997), Tomasella et al. (2000), Hodnett & Tomasella (2002), Oliveira et al. (2002), and Barros (2010). We used a database that includes soil texture (sand, silt, and clay), bulk density, soil organic carbon, soil pH, cation exchange capacity, and the SWRC. Most of the PTFs tested did not show good performance in estimating the SWRC. The parametric PTFs, however, performed better than the point PTFs in assessing the SWRC in the tested region. Among the parametric PTFs, those proposed by Tomasella et al. (2000) achieved the best accuracy in estimating the empirical parameters of the van Genuchten (1980) model, especially when tested in the top soil layer.
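The van Genuchten (1980) retention model whose empirical parameters the parametric PTFs estimate is θ(h) = θr + (θs − θr)/[1 + (αh)^n]^m with m = 1 − 1/n. A minimal sketch follows; the parameter values are illustrative, not fitted to the Pará dataset.

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Soil water retention curve theta(h) of van Genuchten (1980).
    h: matric suction (positive, in units consistent with 1/alpha)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Illustrative parameter values (hypothetical, not from the paper's soils)
h = np.logspace(0, 4, 50)          # suction from 1 to 10^4 (e.g. cm of water)
theta = van_genuchten(h, theta_r=0.10, theta_s=0.45, alpha=0.02, n=1.8)
```

A parametric PTF predicts (θr, θs, α, n) from easy-to-measure properties such as texture and bulk density, after which the whole curve θ(h) follows from this closed form.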

The inclusion-exclusion principle is a well-known property in probability theory and is instrumental in some computational problems, such as the evaluation of system reliability or the calculation of the probability of a Boolean formula in diagnosis. However, in uncertainty theories more general than probability theory, this principle no longer holds in general. It is therefore useful to know for which families of events it continues to hold. This paper investigates this question in the setting of belief functions. After exhibiting original necessary and sufficient conditions for the principle to hold, we illustrate its use in the uncertainty analysis of Boolean and non-Boolean systems in reliability.
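In the probabilistic case, the principle reads P(A1 ∪ … ∪ Ak) = Σ P(Ai) − Σ P(Ai ∩ Aj) + … A small sketch of this computation follows (a hypothetical helper, not the paper's belief-function machinery):

```python
from itertools import combinations

def union_probability(events, prob):
    """Inclusion-exclusion: P(A1 u ... u Ak) from an intersection oracle.
    `prob` maps a frozenset of event indices to the probability of their
    intersection."""
    total = 0.0
    for r in range(1, len(events) + 1):
        sign = (-1) ** (r + 1)                # alternate +/- with subset size
        for subset in combinations(events, r):
            total += sign * prob(frozenset(subset))
    return total

# Toy example: two independent events with probability 0.5 each
p = {frozenset({0}): 0.5, frozenset({1}): 0.5, frozenset({0, 1}): 0.25}
u = union_probability([0, 1], p.__getitem__)  # 0.5 + 0.5 - 0.25 = 0.75
```

The paper's question is precisely when the analogous alternating sum remains valid once `prob` is replaced by a belief function.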

Copulas are a useful tool to model multivariate distributions. While there exist various families of bivariate copulas, the construction of flexible and yet tractable copulas suitable for high-dimensional applications is much more challenging. This is even more true if one is concerned with the analysis of extreme values. In this paper, we construct a class of one-factor copulas and a family of extreme-value copulas well suited for high-dimensional applications and exhibiting a good balance between tractability and flexibility. The inference for these copulas is performed by using a least-squares estimator based on dependence coefficients. The modeling capabilities of the copulas are illustrated on simulated and real datasets.

In this paper, we address the issue of estimating the parameters of general multivariate copulas, that is, copulas whose partial derivatives may not exist. To this aim, we consider a weighted least-squares estimator based on dependence coefficients, and establish its consistency and asymptotic normality. The estimator's performance on finite samples is illustrated on simulations and a real dataset.
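As an illustration of estimation by matching dependence coefficients (a simplified, unweighted sketch, not the paper's estimator), one can fit the Clayton copula parameter by least squares against the sample Kendall's tau, using the known relation τ(θ) = θ/(θ + 2):

```python
import numpy as np

def kendall_tau(x, y):
    """Sample Kendall's tau (naive O(n^2) version, fine for small samples)."""
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign((x[i] - x[j]) * (y[i] - y[j]))
    return 2.0 * s / (n * (n - 1))

def fit_clayton_ls(tau_hat, grid=np.linspace(0.05, 20, 2000)):
    """Least-squares fit of the Clayton parameter theta by matching the
    model value tau(theta) = theta / (theta + 2) to the sample tau."""
    model_tau = grid / (grid + 2.0)
    return grid[np.argmin((model_tau - tau_hat) ** 2)]
```

For example, a sample tau of 0.5 corresponds to θ ≈ 2. A weighted version over several dependence coefficients, as studied in the paper, generalizes this idea beyond copulas with tractable partial derivatives.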
