The multidisciplinary open archive HAL is intended for the deposit and dissemination of research-level scientific articles, whether published or not, and of theses, originating from French or foreign teaching and research institutions, and from public or private laboratories.

Favored theories of giant planet formation center around two main paradigms, namely the core accretion model and the gravitational instability model. These two formation scenarios support the hypothesis that giant planet metallicities should be higher than or equal to that of the parent star. Meanwhile, spectra of the transiting hot Jupiter HD189733b suggest that carbon and oxygen abundances range from depleted to enriched with respect to the star. Here, using a model describing the formation sequence and composition of planetesimals in the protoplanetary disk, we determine the range of volatile abundances in the envelope of HD189733b that is consistent with the 20-80 M⊕ of heavy elements estimated to be present in the planet's envelope. We then compare the inferred carbon and oxygen abundances to those retrieved from spectroscopy, and we find a range of supersolar values that directly fit both spectra and internal structure models. In some cases, we find that the apparent contradiction between the subsolar elemental abundances and the mass of heavy elements predicted in HD189733b by internal structure models can be explained by the presence of large amounts of carbon molecules in the form of polycyclic aromatic hydrocarbons and soots in the upper layers of the envelope, as suggested by recent photochemical models. A diagnostic test that would confirm the presence of these compounds in the envelope is the detection of acetylene. Several alternative hypotheses that could also explain the subsolar metallicity of HD189733b are formulated: the possibility of differential settling in its envelope, the presence of a larger core that did not erode with time, a mass of heavy elements lower than the one predicted by interior models, a heavy element budget resulting from the accretion of volatile-poor planetesimals in specific circumstances, or the combination of all these mechanisms.

A luminescent liquid crystalline compound containing a bulky dispiro[fluorene-9,11′-indeno[1,2-b]fluorene-12′,9′′-fluorene] core has been designed and synthesized by di-substitution of a bromo derivative with N-(4-ethynylphenyl)-3,4,5-tris(hexadecyloxy)benzamide fragments. This di-substituted 3π-2spiro derivative forms stable and well-organized mesophases over large temperature ranges. The combination of DSC, POM and SAXS analyses has revealed the formation of a lamellar mesophase between 60 and 150 °C, followed by another mesophase with a 2-dimensional lattice of rectangular symmetry that remains up to the isotropization point near 225 °C. In the original molecular packing model deduced from SAXS, the tert-butyl terminal groups fill the centre of hollow columns constituted by both the dihydro(1,2-b)indenofluorene and benzamide fragments and separated from each other by the surrounding aliphatic tails. The merging of the columns yielding the lamellar phase turned out to be governed by the dynamics of both the micro-phase segregation process and the network of hydrogen bonds. In the various mesomorphic states and in solution, a strong luminescence was observed. The emission spectrum, however, depends on temperature and changes drastically between the two mesophases and the isotropic liquid. In particular, a strong modulation of the emission wavelength occurs at the isotropic to 2D phase transition. This luminescence modulation results from an enhanced contribution of the vibronic peaks at higher energies in the emission profile. The compound was also found to be soluble in 5CB and was integrated in a guest-host LC cell, allowing efficient modulation of the photoluminescence polarization, in the presence or absence of an electric field.

As part of environmental and consumer protection, CTC carries out analytical chemistry tests on many parameters in aqueous, leather and textile matrices. Since new substances keep being placed on the market and regulations evolve constantly, the development of new analytical methods is necessary. Several analytical methods have thus been developed, for the analysis of industrial effluent discharges from classified installations and for the safety analysis of products made of leather or textile (footwear, leather goods, ready-to-wear, etc.). Chloroalkanes were quantified by gas chromatography (GC) coupled with mass spectrometry (MS) using chemical ionization, both in aqueous matrices (limit of quantification, LOQ, of 0.6 μg/l) and on leathers (LOQ of 2 mg/kg). An analysis of alkylphenols and alkylphenol ethoxylates was developed for aqueous matrices by GC/MS (LOQ of 0.05 μg/l). Several families of flame retardants were then studied: polybrominated diphenyl ethers can be quantified in water (LOQ < 0.05 μg/l) and in leather (LOQ ≤ μg/kg) by GC/MS with chemical ionization; hexabromocyclododecane and organophosphates by liquid chromatography coupled with tandem mass spectrometry for textile matrices (LOQ of 6 mg/kg). Polycyclic aromatic hydrocarbons in leather were then analyzed by GC/MS-MS (LOQ of 250 μg/kg). Finally, a multi-residue method covering several families of organic micropollutants was developed by GC/MS for effluent discharges (LOQ < 0.1 μg/l).

Chemical flooding is currently one of the most promising solutions to increase the recovery of mature reservoirs. In Surfactant-Polymer (SP) processes, several parameters should be taken into account to estimate the return on investment: concentrations of the injected chemical species, slug sizes, initiation times, residual oil saturation, adsorption rates of the chemical species on the rock, etc. Some parameters are design parameters whereas others are uncertain. For operators, defining the optimal values of the former while accounting for the uncertainties on the latter is not an easy task in practice. This work proposes a methodology to help handle this problem. Starting from a synthetic reservoir test case where an SP process is set up, we select design and uncertain parameters which may impact the production. In the reservoir simulator, for the sake of flexibility, some of them are tabulated functions, which enables the user to input data coming from any system. However, point-wise modifications of these curves would inflate the number of parameters, so a particular parameterization is introduced. We then propose a methodology based on Response Surface Modeling (RSM) to first approximate the oil production computed by a reservoir simulator for different values of our parameters and to identify the most influential ones. This RSM is based on a Karhunen-Loève decomposition of the time response of the reservoir simulator and on an approximation of the components of this decomposition by a Gaussian process. This technique allows us to obtain substantial savings in computation time when building the response surfaces. Once good predictability is achieved, the surfaces are used to optimize the design of the SP process, taking economic parameters and uncertainties on the data into account without additional reservoir simulations.
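The two-stage surrogate described above can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's implementation: a one-parameter analytic "simulator" stands in for the reservoir code, the Karhunen-Loève decomposition is obtained as the SVD of the centered response matrix, and a hand-rolled RBF Gaussian process interpolates each KL coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "simulator": a production-like curve depending on one parameter x in [0, 1]
def simulator(x, t):
    return (1 - np.exp(-5 * x * t)) * (1 + 0.3 * x)

t = np.linspace(0, 1, 50)
X = rng.uniform(0, 1, 30)                      # training designs
Y = np.array([simulator(x, t) for x in X])     # 30 runs x 50 time steps

# Empirical Karhunen-Loève decomposition: center, then SVD
mean = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - mean, full_matrices=False)
k = 3                                          # truncation order
scores = U[:, :k] * s[:k]                      # KL coefficients per run

# One Gaussian process (RBF kernel, near-interpolation) per KL component
def gp_fit_predict(X, y, Xnew, ell=0.15, noise=1e-6):
    def K(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    L = np.linalg.cholesky(K(X, X) + noise * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return K(Xnew, X) @ alpha

Xnew = np.array([0.37])
pred_scores = np.array([gp_fit_predict(X, scores[:, j], Xnew)[0] for j in range(k)])
pred_curve = mean + pred_scores @ Vt[:k]       # surrogate prediction at x = 0.37
true_curve = simulator(0.37, t)
print(float(np.max(np.abs(pred_curve - true_curve))))  # small reconstruction error
```

The savings come from the fact that, once the KL basis and the k Gaussian processes are fitted, new parameter values are evaluated through the surrogate rather than through the expensive simulator.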

The recent financial crisis has highlighted the necessity of introducing mixtures of probability distributions in order to improve the estimation of asset returns and, in particular, to better account for risks. Since Pearson (1894), these mixtures have been intensively used in many scientific fields, since they provide very convenient mathematical tools to examine various statistical data and to approximate many probability distributions. They are typically introduced to model the choice of probability distributions within a given parametric family. The coefficients of the mixture usually correspond to the relative frequencies of each possible parameter. In this framework, we examine the single-period portfolio choice model, which has been addressed in the partial equilibrium framework by Brennan and Solanki (1981), Leland (1980) and Prigent (2006). We consider an investor who wants to maximize the expected utility of the value of his portfolio, consisting of one risk-free asset and one risky asset. We provide and analyze the solution for log returns with mixture distributions, in particular for the Gaussian mixture case. The optimal portfolio is characterized for arbitrary utility functions. Our results show that mixtures of distributions can have significant implications for portfolio management.
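A minimal numerical illustration of this portfolio problem, with assumed regime parameters (not taken from the paper): the risky asset's log return follows a two-component Gaussian mixture ("calm" and "crisis" regimes), and the expected CRRA utility of terminal wealth is maximized over the risky weight by Monte Carlo and a grid search.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-regime Gaussian mixture for the risky asset's log return
# (illustrative "calm" and "crisis" parameters)
weights = [0.9, 0.1]
mus     = [0.08, -0.30]
sigmas  = [0.15, 0.35]

def sample_log_returns(n):
    comp = rng.choice(2, size=n, p=weights)
    return rng.normal(np.take(mus, comp), np.take(sigmas, comp))

R  = sample_log_returns(200_000)
rf = 0.02          # risk-free log return
gamma = 3.0        # CRRA relative risk aversion

def expected_utility(w):
    # Terminal wealth for initial wealth 1 and risky weight w
    wealth = (1 - w) * np.exp(rf) + w * np.exp(R)
    return np.mean(wealth ** (1 - gamma) / (1 - gamma))

ws = np.linspace(0.0, 1.0, 101)
eus = [expected_utility(w) for w in ws]
w_star = ws[int(np.argmax(eus))]
print(w_star)      # interior optimum: the crisis regime caps the risky share
```

With a positive risk premium the optimal weight is strictly positive, while the heavy-tailed crisis component keeps it well below full investment, illustrating how the mixture shapes the allocation.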

Causality between time series is a topic of major importance in econometrics, financial engineering, the biological and psychological sciences, and many other fields. A new setting is introduced for examining this rather abstract concept. The corresponding calculations, which are much easier than those required by the celebrated Granger causality, do not necessitate any deterministic or probabilistic modeling. Some convincing computer simulations are presented.

We propose an analytic approach to the frequency bandwidth dimensioning problem faced by cellular network operators who deploy or upgrade their networks in various geographical regions (countries) with inhomogeneous urbanization. We present a model allowing one to capture fundamental relations between users' quality of service parameters (mean downlink throughput), traffic demand, the density of base station deployment, and the available frequency bandwidth. These relations depend on the applied cellular technology (3G or 4G, impacting the user peak bit-rate) and on the path-loss characteristics observed in different (urban, sub-urban and rural) areas. We observe that if the distance between base stations is kept inversely proportional to the distance coefficient of the path-loss function, then the performance of the typical cells of these different areas is similar when serving the same (per-cell) traffic demand. In this case, the frequency bandwidth dimensioning problem can be solved uniformly across the country by applying the mean cell approach proposed in [Blaszczyszyn et al., WiOpt 2014] (http://dx.doi.org/10.1109/WIOPT.2014.6850355). We validate our approach by comparing the analytical results to measurements in operational networks in various geographical zones of different countries.
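As a back-of-the-envelope illustration of such dimensioning relations (the formula below is a standard processor-sharing approximation in which a cell of capacity C serving demand ρ delivers a mean user throughput of roughly C − ρ; it is not necessarily the paper's exact model), the bandwidth needed to sustain a target mean user throughput can be computed as:

```python
# Illustrative M/G/1-PS cell model (assumed, not the paper's formulas):
# mean user throughput ≈ cell capacity - carried traffic demand.
def min_bandwidth_mhz(traffic_demand_mbps, target_throughput_mbps,
                      spectral_eff_bps_per_hz):
    """Smallest bandwidth (MHz) meeting the mean-throughput target."""
    needed_capacity = traffic_demand_mbps + target_throughput_mbps
    return needed_capacity / spectral_eff_bps_per_hz

# Hypothetical urban 4G cell: 30 Mbps demand, 5 Mbps target,
# 1.7 bit/s/Hz mean spectral efficiency
w = min_bandwidth_mhz(30, 5, 1.7)
print(round(w, 1))  # 20.6 (MHz)
```

The same relation, read the other way, shows how densification (lower per-cell demand) trades off against additional spectrum.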

Existing work on privacy by design mostly focuses on technologies rather than methodologies, and on components rather than architectures. In this paper, we advocate the idea that privacy by design should also be addressed at the architectural level and be associated with suitable methodologies. Among other benefits, architectural descriptions enable a more systematic exploration of the design space. In addition, because privacy is intrinsically a complex notion that can be in tension with other requirements, we believe that formal methods should play a key role in this area. After presenting our position, we provide some hints on how our approach can be put into practice, based on ongoing work on a privacy by design environment.

Adaptive radiation therapy aims at compensating for anatomical variations during a radiotherapy course by modifying the treatment plan accordingly. Therefore, regions of interest (ROIs) have to be defined in the most recent imaging data. We investigated three different non-rigid image registration algorithms to automatically propagate ROIs from an initial planning CT to a pre-treatment CT. Copyright 2013 American Association of Physicists in Medicine. The electronic version of this paper can be found at: http://dx.doi.org/10.1118/1.4814280.

In this paper we derive a two-component system of nonlinear equations which models two-dimensional shallow water waves with constant vorticity. We then prove well-posedness of this system using a geometrical framework which allows us to recast it as a geodesic flow on an infinite-dimensional manifold. Finally, we provide a criterion for global existence.

Let $M$ be a complete non-compact Riemannian manifold satisfying the doubling volume property. Let $\overrightarrow{\Delta}$ be the Hodge-de Rham Laplacian acting on $1$-differential forms. According to the Bochner formula, $\overrightarrow{\Delta}=\nabla^*\nabla+R_+-R_-$ where $R_+$ and $R_-$ are respectively the positive and negative part of the Ricci curvature and $\nabla$ is the Levi-Civita connection. We study the boundedness of the Riesz transform $d^*(\overrightarrow{\Delta})^{-\frac{1}{2}}$ from $L^p(\Lambda^1T^*M)$ to $L^p(M)$ and of the Riesz transform $d(\overrightarrow{\Delta})^{-\frac{1}{2}}$ from $L^p(\Lambda^1T^*M)$ to $L^p(\Lambda^2T^*M)$. We prove that, if the heat kernel on functions $p_t(x,y)$ satisfies a Gaussian upper bound and if the negative part $R_-$ of the Ricci curvature is $\epsilon$-sub-critical for some $\epsilon\in[0,1)$, then $d^*(\overrightarrow{\Delta})^{-\frac{1}{2}}$ is bounded from $L^p(\Lambda^1T^*M)$ to $L^p(M)$ and $d(\overrightarrow{\Delta})^{-\frac{1}{2}}$ is bounded from $L^p(\Lambda^1T^*M)$ to $L^p(\Lambda^2T^* M)$ for $p\in(p_0',2]$ where $p_0>2$ depends on $\epsilon$ and on a constant appearing in the doubling volume property. A duality argument gives the boundedness of the Riesz transform $d(\Delta)^{-\frac{1}{2}}$ from $L^p(M)$ to $L^p(\Lambda^1T^*M)$ for $p\in [2,p_0)$ where $\Delta$ is the non-negative Laplace-Beltrami operator. We also give a condition on $R_-$ to be $\epsilon$-sub-critical under both analytic and geometric assumptions.

An identity in law for the area of a spectrally positive Lévy stable process stopped at zero is established. Extending that of Lefebvre for Brownian motion, it involves an inverse Beta random variable and the square of a positive stable random variable. This identity entails that the stopped area is distributed as the perpetuity of a spectrally negative Lévy process, and is hence self-decomposable. We also derive a convergent series representation for the density, whose behaviour at zero is shown to be Fréchet-like.

We study arrays of disordered nanowires arranged in parallel and in contact between two metallic electrodes. If one adjusts with a back gate the chemical potential to one edge of the impurity band, these arrays open promising perspectives for energy harvesting and heat management in a temperature range where the electrons propagate inside the nanowires by phonon assisted hops between localized states. On one hand, the thermopower self-averages to large values while the electrical conductance scales with the number of nanowires. This gives large power factors and suitable figures of merit for the thermoelectric conversion. On the other hand, the phonons are mainly absorbed near one electrode and emitted near the other. This phenomenon can be exploited for cooling hot spots in electronic circuits.

This paper deals with the integrated thermal modelling of photovoltaic panels for the solar protection of buildings under strong solar radiation, as encountered in tropical and humid conditions. The thermal model is integrated in a building simulation code and is able to predict the thermal impact of PV panels installed on buildings in several configurations, as well as their electric production. Basically, the PV panel is considered as a complex wall within which coupled heat transfer occurs. Conduction, convection and radiation heat transfer equations are solved to simulate the global thermal behaviour of the building envelope including the PV panels. The model is first detailed, with a focus on the radiation modelling within the semi-transparent layers of the panels, and then preliminary results are presented in terms of verification. Conclusions are finally drawn regarding the impact of the panels in terms of thermal insulation for summer tropical conditions.
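A drastically simplified, steady-state version of such a layered wall model can be written as a series thermal-resistance network (all layer values below are illustrative assumptions; the paper's model is transient and fully couples conduction, convection and radiation):

```python
# Steady-state 1D resistance network for a PV panel treated as a complex wall.
def heat_flux(t_out, t_in, resistances):
    """Heat flux (W/m2) through layers in series, driven by t_out - t_in (K)."""
    return (t_out - t_in) / sum(resistances)

# Hypothetical layer resistances (thickness / conductivity, m2.K/W):
# glass, PV cells, back sheet, plus lumped air-gap and roof resistances
layers = [0.004 / 1.0, 0.0003 / 148.0, 0.0005 / 0.2, 0.18, 0.10]
q = heat_flux(45.0, 25.0, layers)   # hot panel surface vs indoor air
print(round(q, 1))  # 69.8
```

Even this crude balance shows the insulating role of the air gap: it dominates the total resistance and hence controls the flux transmitted to the building.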

This paper deals with the isotropic realizability of a given regular divergence-free field j in R^3 as a current field, namely to know when j can be written as sigma Du for some isotropic conductivity sigma and some gradient field Du. The local isotropic realizability in R^3 is obtained by Frobenius' theorem provided that j and curl j are orthogonal in R^3. A counter-example shows that Frobenius' condition is not sufficient to derive the global isotropic realizability in R^3. However, assuming that (j, curl j, j x curl j) is an orthogonal basis of R^3, an admissible conductivity sigma is constructed from a combination of the three dynamical flows along the directions j/|j|, curl j/|curl j| and (j/|j|^2) x curl j. When the field j is periodic, the isotropic realizability in the torus requires, in addition, a boundedness assumption satisfied by the flow along the third direction (j/|j|^2) x curl j. Several examples illustrate the sharpness of the realizability conditions.

A brief history of geology at the Sorbonne from 1809 to 1969, presented through the history of its chairs. The end of the period studied corresponds to the transfer of the laboratories of the Paris Faculty of Sciences from the Sorbonne to the newly constructed buildings on the former "Halle aux vins" site, between Place Jussieu and Quai Saint-Bernard.

This paper describes a global sensitivity analysis of a fractal-based turbulence-induced flocculation model. The quantities of interest in this analysis are related to the floc diameters in two different configurations. The input parameters with which the sensitivity analyses are performed are the floc aggregation and breakup parameters, the fractal dimension and the diameter of the primary particles. Two related versions of the flocculation model are considered, both encountered in the literature: (i) using a dimensional floc breakup parameter, and (ii) using a non-dimensional floc breakup parameter. The main results of the sensitivity analyses are that only two parameters of model (ii) are significant (the aggregation and breakup parameters) and that the relationships between parameters and the quantity of interest remain simple. By contrast, with model (i), all parameters have to be considered. When identifying model parameters based on measurements of floc diameters, this analysis hence suggests the use of model (ii) rather than (i). Further, improved models of the fractal dimension do not seem to be required when using the non-dimensional model (ii).
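The variance-based sensitivity analysis can be illustrated on a toy equilibrium floc-diameter model (the formula and parameter ranges below are assumptions for illustration, not the paper's model), using the pick-freeze estimator of first-order Sobol indices:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy steady-state floc diameter: d = d0 * (ka / kb) ** (1 / nf), with
# aggregation ka, breakup kb, fractal dimension nf, primary-particle size d0.
def floc_diameter(ka, kb, nf, d0):
    return d0 * (ka / kb) ** (1.0 / nf)

names = ["ka", "kb", "nf", "d0"]
lo = np.array([0.5, 0.5, 1.8, 1.0])   # assumed lower bounds
hi = np.array([2.0, 2.0, 2.6, 1.2])   # assumed upper bounds

def sample(n):
    return lo + (hi - lo) * rng.random((n, 4))

n = 50_000
A, B = sample(n), sample(n)
yA = floc_diameter(*A.T)
varY = yA.var()

# First-order Sobol index of input i by pick-freeze:
# S_i = Cov(f(A), f(B with column i copied from A)) / Var(f(A))
S = {}
for i, name in enumerate(names):
    Bi = B.copy()
    Bi[:, i] = A[:, i]
    S[name] = np.cov(yA, floc_diameter(*Bi.T))[0, 1] / varY
print({k: round(v, 2) for k, v in S.items()})
```

In this toy setup the aggregation and breakup parameters carry most of the output variance, mirroring the paper's conclusion for model (ii).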

A fast and accurate texture recognition system is presented. The new approach consists in extracting locally and globally invariant representations. The locally invariant representation is built on a multi-resolution convolutional network with a local pooling operator to improve robustness to local orientation and scale changes. This representation is mapped into a globally invariant descriptor using multifractal analysis. We propose a new multifractal descriptor that captures rich texture information and is mathematically invariant to various complex transformations. In addition, two more techniques are presented to further improve the robustness of our system. The first technique consists in combining the generative PCA classifier with multiclass SVMs. The second technique consists of two simple strategies to boost classification results by synthetically augmenting the training set. Experiments show that the proposed solution outperforms existing methods on three challenging public benchmark datasets, while being computationally efficient.
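The fractal-dimension machinery underlying such multifractal descriptors can be illustrated with a plain box-counting estimate (a simplified building block: a full multifractal descriptor aggregates such dimensions over several pixel categorizations):

```python
import math

# Box-counting estimate of the fractal dimension of a 2D point set:
# count occupied boxes at several scales, then fit the slope of
# log(count) versus log(1/scale).
def box_counting_dimension(points, scales):
    xs, ys = [], []
    for s in scales:
        boxes = {(math.floor(x / s), math.floor(y / s)) for x, y in points}
        xs.append(math.log(1 / s))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Sanity check: a densely sampled unit square has dimension ~2
grid = [(i / 64, j / 64) for i in range(64) for j in range(64)]
print(round(box_counting_dimension(grid, [1 / 4, 1 / 8, 1 / 16]), 2))  # 2.0
```

Sparse or filamentary pixel sets yield non-integer slopes, which is the texture information the descriptor exploits.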

The derivation of Debye shielding and Landau damping from the $N$-body description of plasmas is performed directly by using Newton's second law for the $N$-body system. This is done in a few steps with elementary calculations using standard tools of calculus, and no probabilistic setting. Unexpectedly, Debye shielding is encountered together with Landau damping. This approach is shown to be justified in the one-dimensional case when the number of particles in a Debye sphere becomes large. The theory is extended to accommodate a correct description of trapping and chaos due to Langmuir waves. Shielding and collisional transport are found to be two related aspects of the repulsive deflections of electrons, in such a way that each particle is shielded by all other ones while remaining in uninterrupted motion.

Global modeling aims at building mathematical models with concise descriptions. Polynomial Model Search (PoMoS) and Global Modeling (GloMo) are two complementary algorithms (freely downloadable at the following address: http://www.cesbio.ups-tlse.fr/us/pomos_et_glomo.html) designed for the modeling of observed dynamical systems based on a small set of time series. Models considered in these algorithms are based on ordinary differential equations built on a polynomial formulation. More specifically, PoMoS aims at finding polynomial formulations from a given set of 1 to N time series, whereas GloMo is designed for single time series and aims to identify the parameters for a selected structure. GloMo also provides basic features to visualize integrated trajectories and to characterize their structure when it is simple enough: one allows for drawing the first return map for a chosen Poincaré section in the reconstructed space; another computes the Lyapunov exponent along the trajectory. In the present paper, global modeling from single time series is considered. A description of the algorithms is given and three examples are provided. The first example is based on the three variables of the Rössler attractor. The second one comes from an experimental analysis of copper electrodissolution in phosphoric acid, for which a less parsimonious global model was obtained in a previous study. The third example is an exploratory case and concerns the cycle of rainfed wheat under semiarid climatic conditions as observed through a vegetation index derived from a spatial sensor.
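For the first example, the Rössler system can be integrated with a basic RK4 scheme to reproduce the kind of trajectories these tools characterize (a minimal sketch, independent of the PoMoS/GloMo implementations):

```python
# RK4 integration of the Rössler system with the classical chaotic
# parameters a = b = 0.2, c = 5.7.
def rossler(state, a=0.2, b=0.2, c=5.7):
    x, y, z = state
    return (-y - z, x + a * y, b + z * (x - c))

def rk4_step(f, s, h):
    k1 = f(s)
    k2 = f(tuple(si + h / 2 * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + h / 2 * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h / 6 * (a1 + 2 * a2 + 2 * a3 + a4)
                 for si, a1, a2, a3, a4 in zip(s, k1, k2, k3, k4))

s = (1.0, 1.0, 1.0)
h = 0.01
xs = []
for step in range(40_000):      # 400 time units
    s = rk4_step(rossler, s, h)
    if step >= 10_000:          # discard the transient
        xs.append(s[0])

print(min(xs), max(xs))         # x oscillates on the bounded attractor
```

The resulting x(t) series is exactly the kind of single observable from which GloMo would attempt to re-identify a polynomial ODE model.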

In this paper we examined plan continuation error (PCE), a well-known error in which pilots continue the flight plan despite adverse meteorological conditions. Our hypothesis is that a large range of strong negative emotional consequences, including those induced by economic pressure, are associated with the decision to revise the flight plan, and that these favor PCE. We investigated the economic hypothesis with a simplified landing task (a reproduction of a real aircraft instrument) in which uncertainty and reward were manipulated. Heart rate (HR), heart rate variability (HRV) and eye tracking measurements were performed to obtain objective clues on both the cognitive and emotional state of the volunteers. Results showed that volunteers made more risky decisions under the influence of the financial incentive, in particular when uncertainty was high. Psychophysiological examination showed that HR increased and total HRV decreased in response to the cognitive load generated by the task. In addition, HR also increased in response to the financially motivated condition. Finally, fixation times increased when uncertainty was high, confirming the difficulty in obtaining and interpreting information from the instrument in this condition. These results support the assumption that the risky decision-making observed in pilots can be, at least partially, explained by a shift from cold to hot (emotional) decision-making in response to economic constraints and uncertainty.

Standardized neurofeedback (NF) protocols have been extensively evaluated in attention-deficit/hyperactivity disorder (ADHD). However, such protocols do not account for the large EEG heterogeneity in ADHD. Thus, individualized approaches have been suggested to improve the clinical outcome. In this direction, an open-label pilot study was designed to evaluate a NF protocol of relative upper alpha power enhancement in fronto-central sites. The upper alpha band was individually determined using the alpha peak frequency as an anchor point. Twenty ADHD children underwent 18 training sessions. Clinical and neurophysiological variables were measured pre- and post-training. EEG was recorded pre- and post-training, and at pre- and post-training trials within each session, in both eyes-closed resting state and eyes-open task-related activity. A power EEG analysis assessed long-term and within-session effects, in the trained parameter and in all the sensors in the 1-30 Hz spectral range. Learning curves over sessions were assessed as well. Parents rated a clinical improvement in children regarding inattention and hyperactivity/impulsivity. Neurophysiological tests showed an improvement in working memory, concentration and impulsivity (decreased number of commission errors in a continuous performance test). Relative and absolute upper alpha power showed long-term enhancement in task-related activity, and a positive learning curve over sessions. The analysis of within-session effects showed a power decrease ("rebound" effect) in task-related activity, with no significant effects during training trials. We conclude that the enhancement of the individual upper alpha power is effective in improving several measures of clinical outcome and cognitive performance in ADHD. This is the first NF study evaluating such a protocol in ADHD. A controlled evaluation seems warranted due to the positive results obtained in the current study.

Extending and modifying its domain of life by producing artifacts is one of the main characteristics of humankind. From the first hominids, who used a wooden stick or a stone to extend their upper limbs and augment the strength of their gestures, to today's systems engineers, who use technologies to augment human cognition, perception and action, extending the capabilities of the human body remains a major issue. For more than fifty years, cybernetics, computer science and cognitive science have imposed a single reductionist model of human-machine systems: cognitive systems. Inspired by philosophy, behaviorist psychology and the information-processing metaphor, the cognitive-system paradigm requires a functional view and a functional analysis in the design process of human systems. According to that design approach, humans have been reduced to their metaphysical and functional properties in a new dualism, while the requirements of the human body have been left to physical ergonomics or "physiology". With multidisciplinary convergence, the issues of "human-machine" systems and "human artifacts" are evolving. The loss of biological and social boundaries between human organisms and interactive, informational physical artifacts calls into question the current engineering methods and ergonomic design of cognitive systems. New developments of human-machine systems for intensive care, human space activities or bio-engineering systems require grounding human-systems design on a renewed epistemological framework for future models of human systems and evidence-based "bio-engineering". In that context, reclaiming human factors, the augmented human and the nature of human-machine systems is a necessity.

The considerable growth of the world population, and consequently of its needs, puts ever-increasing pressure on forested areas. The tool best suited to monitoring forests at the global scale is satellite remote sensing. This thesis work is set in that context and aims to improve the estimation of forest biophysical parameters from remote sensing data. The originality of this work was to study this estimation by carrying out several sensitivity studies, using both an experimental approach and simulated data. The study first focused on time series of radar scatterometer measurements acquired at two sites: one consisting of a cedar tree in a temperate zone, the other of a tropical forest plot. This sensitivity study was then pursued by imaging, at high resolution, several plots with different configurations inside a pine forest. Finally, simulated optical and radar data were combined in order to assess the contribution of fusing such data in the inversion of biophysical parameters. The experimental results showed that the radar response behaves differently over time depending on the season, notably with the appearance of daily cycles during rain-free periods, in both tropical and temperate zones. Moreover, it was observed that, while the biophysical parameters related to wood and soil moisture caused variations of the radar signal on the order of 1 or 2 dB, the parameters related to tree geometry and ground slope produced variations of up to 5 to 7 dB. Finally, the optical-radar simulator demonstrated how such data could be used for the inversion of biophysical parameters.

Background: Windscapes affect energy costs for flying animals, but animals can adjust their behavior to accommodate wind-induced energy costs. Theory predicts that flying animals should decrease airspeed to compensate for increased tailwind speed and increase airspeed to compensate for increased crosswind speed. In addition, animals are expected to vary their foraging effort in time and space to maximize energy efficiency across variable windscapes. Results: We examined the influence of wind on seabird (thick-billed murre Uria lomvia and black-legged kittiwake Rissa tridactyla) foraging behavior. Airspeed and mechanical flight costs (dynamic body acceleration and wing beat frequency) increased with headwind speed during commuting flights. As predicted, birds adjusted their airspeed to compensate for crosswinds and to reduce the effect of a headwind, but they could not completely compensate for the latter. Because we were able to account for the effects of sampling frequency and wind speed, we accurately estimated commuting flight speed in the absence of wind as 16.6 m s−1 (murres) and 10.6 m s−1 (kittiwakes). High winds decreased delivery rates of schooling fish (murres), energy (murres) and food (kittiwakes) but did not impact daily energy expenditure or chick growth rates. During high winds, murres switched from feeding their offspring schooling fish, which required substantial above-water searching, to amphipods, which required less above-water searching. Conclusions: Adults buffered the adverse effect of high winds on chick growth rates by switching to other food sources during windy days or by increasing food delivery rates when the weather improved.

In this work, we give a new semantics to the notion of additivity as embodied by several discourse markers and particles in French: et, de plus and d'ailleurs. The common property of these different elements is the notion of independence of their arguments. We show that existing accounts of additive particles fail to do full justice to this notion of independence, and we propose a new semantics for et that captures this notion in a Bayesian fashion. We then evaluate the applicability of this analysis to de plus and d'ailleurs and show that, unlike et, these elements are strongly argumentative: they make explicit reference to an external issue that is disputed in the current conversation.

To define their identity, osteopaths place strong emphasis on the narrative of their origins and the memory of their founder, A.T. Still. This narration sometimes gives rise to forms of ritualization that are not unrelated to religious rites. Based on the observation of several situations in training courses organized by European osteopaths, the paradoxes of a memory invested with strong affect are explored. Through the narrative, it is the figure of the founder, and the lineage uniting him to contemporary osteopaths, that are highlighted, while the content of belief is transmitted far more uncertainly. This memory accomplishes the feat of making the reference to an author a token of authenticity while rejecting the content of his teaching.

This paper examines the conditions for the emergence of a hub in the distribution of wine. We illustrate this through a detailed discussion of wine distribution in Eastern Asia and an examination of the case of Hong Kong as an emerging regional wine hub. The Hong Kong Special Administrative Region (HKSAR) Government has imposed zero import tax on wine since June 2008. Since then, the city has attracted a large volume of wine from foreign countries and established a bonded wine warehouse for the Asia-Pacific. In total, only 16% of this wine has served local Hong Kong consumption, with 84% being transshipped to Macau and Mainland China. Well positioned at the heart of the Chinese business diaspora, with good global connections, Hong Kong is currently capturing substantial value from the wine trade. But its position may be threatened by competing hubs (e.g. Singapore) if these are able to adapt to the needs of the rapidly changing wine market. To analyze this situation, this paper uses the concept of agility, explaining how market knowledge, flexibility and responsiveness are key elements of regional competitiveness.

This paper deals with the integrated thermal modelling of photovoltaic panels for the solar protection of buildings under strong solar radiation, as encountered in tropical and humid conditions. The thermal model is integrated in a building simulation code and is able to predict the thermal impact of PV panels installed on buildings in several configurations, as well as their electricity production. Basically, the PV panel is considered as a complex wall within which coupled heat transfers occur. Conduction, convection and radiation heat transfer equations are solved to simulate the global thermal behaviour of the building envelope, including the PV panels. The model is first detailed, with a focus on radiation modelling within the semi-transparent layers of the panels, and then preliminary results are presented for verification purposes. Conclusions are finally drawn regarding the impact of the panels in terms of thermal insulation under tropical summer conditions.
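
As a minimal illustration of the conduction part of such a coupled model (this is not the paper's integrated code; the layer thickness, diffusivity and surface temperatures below are assumed values), a 1D transient heat-conduction solve across a single wall layer with an explicit finite-difference scheme:

```python
import numpy as np

# Minimal sketch: 1D transient conduction through one homogeneous layer of a
# PV-panel-like wall, solved with an explicit (FTCS) finite-difference scheme.
L = 0.02                      # layer thickness [m] (assumed)
alpha = 1e-6                  # thermal diffusivity [m^2/s] (assumed)
nx = 21
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha      # respects the FTCS stability limit dt <= dx^2 / (2*alpha)

T = np.full(nx, 25.0)         # initial temperature field [degC]
T_out, T_in = 60.0, 25.0      # imposed sunlit / interior surface temperatures

for _ in range(2000):
    T[0], T[-1] = T_out, T_in
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

# At steady state the temperature profile is linear between the two surfaces,
# so the mid-layer temperature approaches (60 + 25) / 2 = 42.5 degC.
print(round(T[nx // 2], 1))
```

In the paper's model this conduction solve would be coupled, layer by layer, with convection at the surfaces and radiation within the semi-transparent layers.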

Adaptive radiation therapy aims at compensating for anatomical variations during a radiotherapy course by modifying the treatment plan accordingly. Therefore, regions of interest (ROIs) have to be defined in the most recent imaging data. We investigated three different non-rigid image registration algorithms to automatically propagate ROIs from an initial planning CT to a pre-treatment CT. Copyright 2013 American Association of Physicists in Medicine. The electronic version of this paper can be found at the following location: http://dx.doi.org/10.1118/1.4814280.

Images transmitted over communication networks must be authenticated: proof of originality and robustness against hacker attacks are required. Existing techniques, such as cryptographic methods, are not sufficient. An effective and robust solution is proposed in this paper, based on watermarking the video, and specifically the Motion JPEG stream. We focus on one of the major properties of the JPEG image, the quantization matrix, in which the watermark is embedded. We detail the results obtained against several attacks.
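
The abstract does not specify how the watermark is carried by the quantization matrix, so the sketch below assumes a simple parity (least-significant-bit) embedding in a placeholder 8x8 table, only to illustrate the general idea:

```python
import numpy as np

# Assumed scheme, NOT the paper's exact method: embed a binary watermark in a
# JPEG quantization matrix by forcing the parity (LSB) of each entry to carry
# one watermark bit. Entries must stay >= 1 to remain valid quantization steps.
def embed(q_matrix, bits):
    q = q_matrix.flatten().astype(int)
    assert len(bits) <= len(q)
    for i, b in enumerate(bits):
        q[i] = (q[i] & ~1) | b        # set the LSB to the watermark bit
        q[i] = max(q[i], 1)           # keep the quantization step valid
    return q.reshape(q_matrix.shape)

def extract(q_matrix, n_bits):
    return [int(v) & 1 for v in q_matrix.flatten()[:n_bits]]

# Placeholder 8x8 quantization table (illustrative values, not a standard table).
Q = np.arange(8, 72).reshape(8, 8)
wm = [1, 0, 1, 1, 0, 0, 1, 0]
Q_marked = embed(Q, wm)
print(extract(Q_marked, 8))
```

Because each entry changes by at most 1, the perturbation of the decoded image is small, which is one reason the quantization table is an attractive carrier.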

Biomolecules essentially fulfill their function through continual recognition of and binding to other molecules. Biomolecular recognition is therefore a phenomenon of prominent importance. When the progress of monoclonal antibody technology and genetic engineering allowed biologists to characterize and isolate an impressive variety of receptor molecules, it was first felt that affinity constants and kinetic rates provided a satisfactory account of receptor-ligand interactions. However, a number of advances that occurred during the last two decades showed that i) the conventional framework was not sufficient to predict the behaviour of biomolecules in many physiologically relevant situations, ii) a number of techniques allowed investigators to dissect biomolecule interactions at the single-bond level and obtain new information on the kinetic and mechanical properties of these interactions, and iii) new theoretical techniques and the development of computer simulation, as well as the enormous increase in available structural data, provided new avenues for relating structural and functional properties. The aim of this introductory chapter is to present a brief outline of these advances and pending issues.

Advances in neuroscience are underpinned by large, multicenter studies and a mass of heterogeneous datasets. When investigating the relationships between brain anatomy and brain functions under normal and pathological conditions, measurements obtained from a broad range of brain imaging techniques are correlated with information on each subject's neurologic state, cognitive assessments and behavioral scores derived from questionnaires and tests. The development of ontologies in neuroscience appears to be a valuable way of gathering and properly handling these heterogeneous data, particularly through the use of federated architectures. We recently proposed a multilayer ontology for sharing brain images and regions of interest in neuroimaging. Here, we report on an extension of this ontology to the representation of instruments used to assess brain and cognitive functions and behavior in humans. This extension consists of a 'core' ontology that accounts for the properties shared by all instruments, supplemented by 'domain' ontologies that conceptualize standard instruments. We also specify how this core ontology has been refined to build domain ontologies dedicated to widely used instruments, and how various scores used in the neurosciences are represented. Lastly, we discuss our design choices, the ontology's limitations and planned extensions aimed at querying and reasoning across distributed data sources.

Extending and modifying their domain of life through artifact production is one of the main characteristics of humankind. From the first hominid, who used a wooden stick or a stone to extend his upper limbs and augment the strength of his gestures, to today's systems engineers, who use technologies to augment human cognition, perception and action, extending the capabilities of the human body remains a major issue. For more than fifty years, cybernetics, computer science and cognitive science have imposed a single reductionist model of human-machine systems: cognitive systems. Inspired by philosophy, behaviorist psychology and the information-processing metaphor, the cognitive-system paradigm requires a functional view and a functional analysis in the human-systems design process. According to that design approach, humans have been reduced to their metaphysical and functional properties in a new dualism, while the requirements of the human body have been left to physical ergonomics or "physiology". With multidisciplinary convergence, the issues of "human-machine" systems and "human artifacts" are evolving. The loss of biological and social boundaries between human organisms and interactive, informational physical artifacts calls into question current engineering methods and the ergonomic design of cognitive systems. New developments of human-machine systems for intensive care, human space activities or bio-engineering systems require grounding human-systems design in a renewed epistemological framework for future human-systems models and evidence-based "bio-engineering". In that context, reclaiming human factors, the augmented human and the nature of human-machine systems is a necessity.

This paper is dedicated to the study of an estimator of the generalized Hoeffding decomposition. We build such an estimator using an empirical Gram-Schmidt approach and derive a consistency rate in a high-dimensional setting. We then apply a greedy algorithm with these estimators to a sensitivity analysis. We also establish the consistency of this L2-boosting under sparsity assumptions on the signal to be analyzed. The paper concludes with numerical experiments that demonstrate the low computational cost of our method, as well as its efficiency on a standard sensitivity-analysis benchmark.
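
The empirical Gram-Schmidt step can be sketched as follows. This illustrates the general technique, not the authors' full Hoeffding-decomposition estimator: feature columns are orthonormalized under the empirical inner product <f, g>_n = (1/n) sum_i f(x_i) g(x_i), and numerically redundant directions are discarded.

```python
import numpy as np

# Empirical Gram-Schmidt: orthonormalize columns of a design matrix under the
# empirical L2 inner product <f, g>_n = mean(f * g). Columns that are (nearly)
# linear combinations of previous ones are dropped.
def empirical_gram_schmidt(columns, tol=1e-12):
    basis = []
    for v in columns.T:
        w = v.astype(float).copy()
        for b in basis:
            w -= np.mean(w * b) * b       # remove the empirical projection on b
        norm = np.sqrt(np.mean(w * w))
        if norm > tol:
            basis.append(w / norm)
    return np.column_stack(basis)

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
X[:, 2] = X[:, 0] + 0.5 * X[:, 1]         # exactly redundant third column
B = empirical_gram_schmidt(X)
G = B.T @ B / X.shape[0]                  # empirical Gram matrix of the basis
print(B.shape[1], bool(np.allclose(G, np.eye(B.shape[1]), atol=1e-8)))
```

An orthonormal empirical basis of this kind is what makes the subsequent greedy (L2-boosting) selection of decomposition terms computationally cheap.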

This paper addresses the problem of estimating the extreme value index in the presence of random censoring, for distributions in the Weibull domain of attraction. The methodologies introduced in [Worms 2014] for the heavy-tailed case are adapted here to the negative extreme value index framework, leading to weighted versions of the popular moments of relative excesses. This leads to the definition of two families of estimators (with an adaptation of the so-called Moment estimator as a particular case), whose consistency is proved under a first-order condition. Illustrations of their performance, drawn from an extensive simulation study, are provided.
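
For background, here is a sketch of the classical (uncensored) Moment estimator of Dekkers, Einmahl and de Haan, which the paper adapts to censored data through weighted moments of relative excesses. The uniform law has extreme value index -1, i.e. it lies in the Weibull domain of attraction considered by the paper:

```python
import numpy as np

# Classical Moment estimator of the extreme value index (no censoring), built
# from the first two empirical moments of log relative excesses above the
# (k+1)-th largest observation.
def moment_estimator(x, k):
    x = np.sort(x)
    excess = np.log(x[-k:]) - np.log(x[-k - 1])   # log relative excesses
    m1, m2 = excess.mean(), (excess ** 2).mean()
    return m1 + 1.0 - 0.5 / (1.0 - m1 ** 2 / m2)

rng = np.random.default_rng(2)
sample = rng.uniform(size=100_000)   # uniform law: extreme value index is -1
est = moment_estimator(sample, k=2000)
print(round(est, 2))                 # close to -1 for large samples
```

Under random censoring, the observed excesses no longer reflect the true tail, which is why the paper replaces these plain moments with weighted versions.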

This communication is devoted to short-term forecasting of solar irradiance and irradiation, which is useful for electricity production. Several different time series approaches are employed. Our results and the corresponding numerical simulations show that techniques that do not need a large amount of historical data perform better than those that do, especially when those data are quite noisy.
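
A toy comparison illustrates this point (the abstract does not name the specific techniques; the diurnal-cycle model and noise level below are assumed): a persistence forecast, which needs no history at all, versus a climatological-mean forecast built from a long, noisy history.

```python
import numpy as np

# Synthetic "irradiance": a crude half-sine diurnal cycle with multiplicative
# noise (illustrative data only, 48 samples per day).
rng = np.random.default_rng(3)
t = np.arange(1000)
clear_sky = np.maximum(np.sin(2 * np.pi * t / 48), 0)
irradiance = clear_sky * (1 + 0.3 * rng.standard_normal(1000))

truth = irradiance[1:]
persistence = irradiance[:-1]                    # next value = last value
climatology = np.full(truth.size, irradiance.mean())  # mean of the history

rmse = lambda f: np.sqrt(np.mean((f - truth) ** 2))
print(rmse(persistence) < rmse(climatology))
```

On such noisy data the history-free persistence forecast beats the history-based mean, in line with the abstract's conclusion, because one noisy recent observation still tracks the diurnal cycle better than a long-run average does.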
