The multidisciplinary open archive HAL is intended for the deposit and dissemination of research-level scientific articles, whether published or not, and of theses, originating from French or foreign teaching and research institutions, and from public or private laboratories.

Latest Deposits
Chemistry

Quantitative Economics and Finance

Computer Science

Mathematics

Physics

Planet and Universe

Nonlinear Science

Cognitive Sciences

Environmental Sciences

Humanities and Social Sciences

Engineering Sciences

Life Sciences

Statistics

Favored theories of giant planet formation center around two main paradigms, namely the core accretion model and the gravitational instability model. Both formation scenarios predict that the metallicity of a giant planet should be higher than or equal to that of its parent star. Meanwhile, spectra of the transiting hot Jupiter HD189733b suggest that carbon and oxygen abundances range from depleted to enriched with respect to the star. Here, using a model describing the formation sequence and composition of planetesimals in the protoplanetary disk, we determine the range of volatile abundances in the envelope of HD189733b that is consistent with the 20-80 M⊕ of heavy elements estimated to be present in the planet's envelope. We then compare the inferred carbon and oxygen abundances to those retrieved from spectroscopy, and we find a range of supersolar values that directly fits both spectra and internal structure models. In some cases, we find that the apparent contradiction between the subsolar elemental abundances and the mass of heavy elements predicted in HD189733b by internal structure models can be explained by the presence of large amounts of carbon molecules in the form of polycyclic aromatic hydrocarbons and soots in the upper layers of the envelope, as suggested by recent photochemical models. A diagnostic test that would confirm the presence of these compounds in the envelope is the detection of acetylene. Several alternative hypotheses that could also explain the subsolar metallicity of HD189733b are formulated: the possibility of differential settling in its envelope, the presence of a larger core that did not erode with time, a mass of heavy elements lower than the one predicted by interior models, a heavy element budget resulting from the accretion of volatile-poor planetesimals in specific circumstances, or a combination of all these mechanisms.

A luminescent liquid crystalline compound containing a bulky dispiro[fluorene-9,11′-indeno[1,2-b]fluorene-12′,9′′-fluorene] core has been designed and synthesized by di-substitution of a bromo derivative with N-(4-ethynylphenyl)-3,4,5-tris(hexadecyloxy)benzamide fragments. This di-substituted 3π-2spiro derivative forms stable and well-organized mesophases over large temperature ranges. The combination of DSC, POM and SAXS analyses has revealed the formation of a lamellar mesophase between 60 and 150 °C, followed by another mesophase with a 2-dimensional lattice of rectangular symmetry that remains up to the isotropization point near 225 °C. In the original molecular packing model deduced from SAXS, the tert-butyl terminal groups fill the centre of hollow columns constituted by both the dihydro(1,2-b)indenofluorene and benzamide fragments and separated from each other by the surrounding aliphatic tails. The merging of the columns yielding the lamellar phase turned out to be governed by the dynamics of both the micro-phase segregation process and the network of hydrogen bonds. In the various mesomorphic states and in solution, a strong luminescence was observed. The emission spectrum, however, depends on temperature and changes drastically between the two mesophases and the isotropic liquid. In particular, a strong modulation of the emission wavelength occurs at the isotropic to 2D phase transition. This luminescence modulation results from an enhanced contribution of the vibronic peaks at higher energies in the emission profile. The compound was also found to be soluble in 5CB and was integrated in a guest-host LC cell, allowing efficient modulation of the photoluminescence polarization in the presence or absence of an electric field.

As part of environmental and consumer protection, CTC carries out analytical chemistry tests on many parameters in aqueous, leather and textile matrices. Since new substances are constantly put on the market and regulations keep evolving, the development of new analytical methods is necessary. Several analytical methods have thus been developed, for the analysis of industrial effluent discharges from classified installations and for the safety analysis of products using leather or textile (footwear, leather goods, ready-to-wear, etc.). Chloroalkanes were quantified by gas chromatography (GC) coupled with mass spectrometry (MS) using chemical ionization, both in aqueous matrices (limit of quantification, LOQ, of 0.6 μg/l) and in leathers (LOQ of 2 mg/kg). An analysis of alkylphenols and alkylphenol ethoxylates was developed for aqueous matrices by GC/MS (LOQ of 0.05 μg/l). Several families of flame retardants were then studied. Polybromodiphenyl ethers can be quantified in water (LOQ < 0.05 μg/l) and in leather (LOQ ≤ μg/kg) by GC/MS with chemical ionization; hexabromocyclododecane and organophosphates, by liquid chromatography with tandem mass spectrometry, for textile matrices (LOQ of 6 mg/kg). Polycyclic aromatic hydrocarbons in leather were then analyzed by GC/MS-MS (LOQ of 250 μg/kg). Finally, a multi-residue method covering several families of organic micropollutants was developed by GC/MS for effluent discharges (LOQ < 0.1 μg/l).

Chemical flooding is currently one of the most promising solutions to increase the recovery of mature reservoirs. In Surfactant-Polymer (SP) processes, several parameters should be taken into account to estimate the return on investment: concentrations of the injected chemical species, slug sizes, initiation times, residual oil saturation, adsorption rates of the chemical species on the rock, etc. Some parameters are design parameters whereas others are uncertain. For operators, defining the optimal values of the former while accounting for the uncertainties related to the latter is not an easy task in practice. This work proposes a methodology to help handle this problem. Starting from a synthetic reservoir test case where an SP process is set up, we select design and uncertain parameters which may impact the production. In the reservoir simulator, for the sake of flexibility, some of them are tabulated functions, which enables the user to input data coming from any system. However, point-wise modifications of these curves would cause the number of parameters to soar. Therefore, a particular parameterization is introduced. We then propose a methodology based on Response Surface Modeling (RSM) to first approximate the oil production computed by a reservoir simulator for different values of our parameters and identify the most influential ones. This RSM is based on a Karhunen-Loève decomposition of the time response of the reservoir simulator and on an approximation of the components of this decomposition by a Gaussian process. This technique allows us to obtain substantial savings in computation time when building the response surfaces. Once a good predictability is achieved, the surfaces are used to optimize the design of the SP process, taking economic parameters and uncertainties on the data into account without additional reservoir simulations.
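The Karhunen-Loève (proper orthogonal decomposition) step can be sketched with an SVD. The "simulator" below is a toy one-parameter exponential response standing in for the reservoir simulator, and keeping k = 3 modes is an arbitrary choice; in the actual methodology, each retained coefficient would then be approximated by a Gaussian process of the design and uncertain parameters:

```python
import numpy as np

# Toy stand-in for simulator time responses: one hypothetical input
# parameter x, 50 sampled runs, 200 time steps each.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)                     # time grid
xs = rng.uniform(0.5, 2.0, size=50)                # sampled parameter values
Y = np.array([1.0 - np.exp(-x * t) for x in xs])   # 50 time responses

# Karhunen-Loève decomposition of the centered responses via a thin SVD.
mean = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - mean, full_matrices=False)
k = 3                                  # number of retained KL modes
coeffs = U[:, :k] * s[:k]              # per-run KL coefficients
Y_hat = mean + coeffs @ Vt[:k]         # truncated reconstruction

err = np.abs(Y - Y_hat).max()          # worst-case reconstruction error
```

A few modes suffice because the response family is smooth in the parameter; the surrogate then only needs to predict k scalar coefficients per run instead of the whole time series.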

The recent financial crisis has highlighted the necessity of introducing mixtures of probability distributions in order to improve the estimation of asset returns and, in particular, to better account for risk. Since Pearson (1894), such mixtures have been used intensively in many scientific fields, as they provide very convenient mathematical tools to examine various statistical data and to approximate many probability distributions. They are typically introduced to model the choice of probability distributions among a given parametric family. The coefficients of the mixture usually correspond to the relative frequencies of each possible parameter. In this framework, we examine the single-period portfolio choice model, which has been addressed in the partial equilibrium framework by Brennan and Solanki (1981), Leland (1980) and Prigent (2006). We consider an investor who wants to maximize the expected utility of the value of his portfolio, consisting of one risk-free asset and one risky asset. We provide and analyze the solution for log returns with mixture distributions, in particular for the Gaussian mixture case. The optimal portfolio is characterized for arbitrary utility functions. Our results show that mixtures of distributions can have significant implications for portfolio management.
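The flavor of the problem can be illustrated numerically (a Monte Carlo sketch with invented parameters, not the paper's closed-form analysis): with a two-regime Gaussian mixture for the risky log return and a CRRA utility, the optimal fraction invested in the risky asset can be found by a simple grid search:

```python
import numpy as np

# Illustrative only: a "calm" and a "crisis" regime form a two-component
# Gaussian mixture for the risky asset's log return. All numbers are made up.
rng = np.random.default_rng(1)
n = 200_000
calm = rng.random(n) < 0.9                        # 90% calm, 10% crisis
logret = np.where(calm,
                  rng.normal(0.06, 0.15, n),      # calm regime
                  rng.normal(-0.25, 0.35, n))     # crisis regime
risky = np.exp(logret)                            # gross risky return
rf = 1.02                                         # gross risk-free return

def expected_utility(a, gamma=3.0):
    """Expected CRRA utility of terminal wealth for risky fraction a."""
    w = (1 - a) * rf + a * risky
    return np.mean(w ** (1 - gamma) / (1 - gamma))

grid = np.linspace(0.0, 1.0, 101)
best = max(grid, key=expected_utility)            # optimal risky fraction
```

The heavy left tail contributed by the crisis component pushes the optimal allocation well below full investment, which is the kind of effect a single Gaussian model understates.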

Causation between time series is a most important topic in econometrics, financial engineering, biological and psychological sciences, and many other fields. A new setting is introduced for examining this rather abstract concept. The corresponding calculations, which are much easier than those required by the celebrated Granger-causality, do not necessitate any deterministic or probabilistic modeling. Some convincing computer simulations are presented.

A superstring of a set of words is a string that contains each input word as a substring. Given such a set, the Shortest Superstring Problem (SSP) asks for a superstring of minimum length. SSP is an important theoretical problem related to the Asymmetric Travelling Salesman Problem, and also has practical applications in data compression and in bioinformatics. Indeed, it models the question of assembling a genome from a set of sequencing reads. Unfortunately, SSP is known to be NP-hard even on a binary alphabet, and also hard to approximate with respect to the superstring length or to the compression achieved by the superstring. Even the variant in which all words share the same length r, called r-SSP, is NP-hard whenever r > 2. Numerous involved approximation algorithms achieve approximation ratios above 2 for the superstring, but remain difficult to implement in practice. In contrast, the greedy conjecture, posed in 1988, asks whether a simple greedy agglomeration algorithm achieves a ratio of 2 for SSP. Here, we present a novel approach that bounds the superstring approximation ratio with the compression ratio, which leads to a first proof of the greedy conjecture for 3-SSP.
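The greedy agglomeration algorithm the conjecture refers to is simple to state: repeatedly merge the two strings with the longest overlap until one string remains. A minimal, deliberately naive sketch (quadratic overlap computation, for illustration only):

```python
from itertools import permutations

def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_superstring(words):
    """Greedy agglomeration: merge the pair with maximum overlap."""
    # Drop words contained in other words: they are already covered.
    words = [w for w in set(words)
             if not any(w != v and w in v for v in set(words))]
    while len(words) > 1:
        best = (-1, None, None)
        for a, b in permutations(words, 2):
            k = overlap(a, b)
            if k > best[0]:
                best = (k, a, b)
        k, a, b = best
        words.remove(a)
        words.remove(b)
        words.append(a + b[k:])       # merge a and b on their overlap
    return words[0]
```

On {abc, bcd, cde} the algorithm returns the optimal superstring abcde; the conjecture states that in general its output is at most twice the optimal length.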

This paper investigates the execution of tree-shaped task graphs using multiple processors. Each edge of such a tree represents a large piece of data. A task can only be executed if all its input and output data fit into memory, and a piece of data can only be removed from memory after the completion of the task that uses it as input. Such trees arise, for instance, in the multifrontal method of sparse matrix factorization. The peak memory needed for processing the entire tree depends on the execution order of the tasks. With one processor, the objective of the tree traversal is to minimize the required memory. This problem has been well studied and optimal polynomial algorithms have been proposed. Here, we extend the problem by considering multiple processors, which is of obvious interest in the application area of matrix factorization. With multiple processors comes the additional objective of minimizing the time needed to traverse the tree, i.e., of minimizing the makespan. Not surprisingly, this problem proves to be much harder than the sequential one. We study the computational complexity of this problem and provide inapproximability results even for unit-weight trees. We design a series of practical heuristics achieving different trade-offs between the minimization of peak memory usage and makespan. Some of these heuristics are able to process a tree while keeping the memory usage under a given memory limit. The different heuristics are evaluated in an extensive experimental evaluation using realistic trees.

We investigate a resource allocation problem in a multi-class server with convex holding costs and user impatience under the average cost criterion. In general, the optimal policy has a complex dependency on all the input parameters and state information. Our main contribution is to derive index policies that can serve as heuristics and are shown to give good performance. Our index policy attributes to each class an index, which depends on the number of customers currently present in that class. The index values are obtained by solving a relaxed version of the optimal stochastic control problem and combining results from restless multi-armed bandits and queueing theory. They can be expressed as a function of the steady-state distribution probabilities of a one-dimensional birth-and-death process. For linear holding cost, the index can be calculated in closed-form and turns out to be independent of the arrival rates and the number of customers present. In the case of no abandonments and linear holding cost, our index coincides with the $c\mu$-rule, which is known to be optimal in this simple setting. For general convex holding cost we derive properties of the index value in limiting regimes: we consider the behavior of the index (i) as the number of customers in a class grows large, which allows us to derive the asymptotic structure of the index policies, (ii) as the abandonment rate vanishes, which allows us to retrieve an index policy proposed for the multi-class M/M/1 queue with convex holding cost and no abandonments, and (iii) as the arrival rate goes to either 0 or $\infty$, representing light- and heavy-traffic regimes, respectively. We show that Whittle's index policy is asymptotically optimal in both light- and heavy-traffic regimes. To obtain further insights into the index policy, we consider the fluid version of the relaxed problem and derive a closed-form expression for the fluid index.
The latter coincides with the stochastic model in case of linear holding costs. For arbitrary convex holding cost the fluid index can be seen as the $Gc\mu/\theta$-rule, that is, including abandonments into the generalized $c\mu$-rule ($Gc\mu$-rule). Numerical experiments show that our index policies become optimal as the load in the system increases.

We review different properties related to the Cauchy problem for the (nonlinear) Schrödinger equation with a smooth potential. For energy-subcritical nonlinearities and at most quadratic potentials, we investigate the decay in space necessary for the Cauchy problem to be locally (and globally) well-posed. The characterization of the minimal decay is different in the case of super-quadratic potentials.

We prove that nonlinear Gibbs measures can be obtained from the corresponding many-body, grand-canonical, quantum Gibbs states, in a mean-field limit where the temperature T diverges and the interaction behaves as 1/T. We proceed by characterizing the interacting Gibbs state as minimizing a functional counting the free energy relative to the non-interacting case. We then perform an infinite-dimensional analogue of phase-space semiclassical analysis, using fine properties of the quantum relative entropy, the link between quantum de Finetti measures and upper/lower symbols in a coherent state basis, as well as Berezin-Lieb type inequalities. Our results cover the measure built on the defocusing nonlinear Schrödinger functional on a finite interval, as well as smoother interactions in dimensions $d\geq2$.

There are well-established connections between combinatorial optimization, optimal transport theory and hydrodynamics, through the linear assignment problem in combinatorics, the Monge-Kantorovich problem in optimal transport theory and the model of inviscid, potential, pressure-less fluids in hydrodynamics. Here, we consider the more challenging quadratic assignment problem (which is NP-hard, while the linear assignment problem is in P) and find, in a particular case, a correspondence with the problem of finding stationary solutions of Euler's equations for incompressible fluids. For that purpose, we introduce and analyze a suitable "gradient flow" equation. Combining ideas of P.-L. Lions (for the Euler equations) and Ambrosio-Gigli-Savaré (for the heat equation), we provide for the initial value problem a concept of generalized "dissipative" solutions which always exist globally in time and are unique whenever they are smooth.

In vitro investigation of neural architectures requires cell positioning. For that purpose, micro-magnets have been developed on silicon substrates and combined with chemical patterning to attract cells to adhesive sites and keep them there during incubation. We have shown that the use of micro-magnets makes it possible to achieve a high filling factor (∼90%) of defined adhesive sites in neural networks and prevents the migration of cells during growth. This approach has great potential for neural interfacing by providing accurate and time-stable coupling with integrated nanodevices.

Upper bound limit analysis allows one to evaluate directly the ultimate load of structures without performing a cumbersome incremental analysis. In order to numerically apply this method to thin plates in bending, several authors have proposed to use various finite elements discretizations. We provide in this paper a mathematical analysis which ensures the convergence of the finite element method, even with finite elements with discontinuous derivatives such as the quadratic 6 node Lagrange triangles and the cubic Hermite triangles. More precisely, we prove the $\Gamma$-convergence of the discretized problems towards the continuous limit analysis problem. Numerical results illustrate the relevance of this analysis for the yield design of both homogeneous and non-homogeneous materials.

A brief history of geology at the Sorbonne from 1809 to 1969, presented through the history of its chairs. The end of the period studied corresponds to the transfer of the laboratories of the Paris Faculty of Sciences from the Sorbonne to the newly constructed buildings on the site of the former "Halle aux vins", between Place Jussieu and Quai Saint-Bernard.

This paper describes a global sensitivity analysis of a fractal-based turbulence-induced flocculation model. The quantities of interest in this analysis are related to the floc diameters in two different configurations. The input parameters with which the sensitivity analyses are performed are the floc aggregation and breakup parameters, the fractal dimension and the diameter of the primary particles. Two related versions of the flocculation model, both encountered in the literature, are considered: (i) using a dimensional floc breakup parameter, and (ii) using a non-dimensional floc breakup parameter. The main results of the sensitivity analyses are that only two parameters of model (ii) are significant (the aggregation and breakup parameters) and that the relationships between parameter and quantity of interest remain simple. By contrast, with model (i), all parameters have to be considered. When identifying model parameters based on measurements of floc diameters, this analysis hence suggests the use of model (ii) rather than (i). Further, improved models of the fractal dimension do not seem to be required when using the non-dimensional model (ii).

A fast and accurate texture recognition system is presented. The new approach consists in extracting locally and globally invariant representations. The locally invariant representation is built on a multi-resolution convolutional network with a local pooling operator to improve robustness to local orientation and scale changes. This representation is mapped into a globally invariant descriptor using multifractal analysis. We propose a new multifractal descriptor that captures rich texture information and is mathematically invariant to various complex transformations. In addition, two more techniques are presented to further improve the robustness of our system. The first technique consists in combining the generative PCA classifier with multiclass SVMs. The second technique consists of two simple strategies to boost classification results by synthetically augmenting the training set. Experiments show that the proposed solution outperforms existing methods on three challenging public benchmark datasets, while being computationally efficient.

The derivation of Debye shielding and Landau damping from the $N$-body description of plasmas is performed directly by using Newton's second law for the $N$-body system. This is done in a few steps with elementary calculations using standard tools of calculus, and no probabilistic setting. Unexpectedly, Debye shielding is encountered together with Landau damping. This approach is shown to be justified in the one-dimensional case when the number of particles in a Debye sphere becomes large. The theory is extended to accommodate a correct description of trapping and chaos due to Langmuir waves. Shielding and collisional transport are found to be two related aspects of the repulsive deflections of electrons, in such a way that each particle is shielded by all other ones while remaining in uninterrupted motion.

Global modeling aims to build concise mathematical models of observed dynamical systems. Polynomial Model Search (PoMoS) and Global Modeling (GloMo) are two complementary algorithms (freely downloadable at the following address: http://www.cesbio.ups-tlse.fr/us/pomos_et_glomo.html) designed for the modeling of observed dynamical systems based on a small set of time series. The models considered in these algorithms are ordinary differential equations built on a polynomial formulation. More specifically, PoMoS aims at finding polynomial formulations from a given set of 1 to N time series, whereas GloMo is designed for single time series and aims to identify the parameters of a selected structure. GloMo also provides basic features to visualize integrated trajectories and to characterize their structure when it is simple enough: one draws the first return map for a chosen Poincaré section in the reconstructed space; another computes the Lyapunov exponent along the trajectory. In the present paper, global modeling from single time series is considered. A description of the algorithms is given and three examples are provided. The first example is based on the three variables of the Rössler attractor. The second comes from an experimental analysis of copper electrodissolution in phosphoric acid, for which a less parsimonious global model was obtained in a previous study. The third example is an exploratory case and concerns the cycle of rainfed wheat under semiarid climatic conditions, as observed through a vegetation index derived from a spaceborne sensor.
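The core idea of global modeling from a single time series can be illustrated on a toy one-dimensional example (hypothetical coefficients; PoMoS/GloMo handle multivariate systems and perform structure selection, which is omitted here): simulate x' = a x + b x², estimate derivatives by finite differences, and recover (a, b) by least squares over a polynomial library:

```python
import numpy as np

# Ground-truth polynomial ODE (toy, coefficients invented): x' = a*x + b*x^2
a_true, b_true = 1.0, -0.5
dt, n = 1e-3, 20_000
x = np.empty(n)
x[0] = 0.1
for i in range(n - 1):                      # simple Euler integration
    x[i + 1] = x[i] + dt * (a_true * x[i] + b_true * x[i] ** 2)

# Global modeling step: finite-difference derivative + least-squares fit
# of the coefficients over the polynomial library {x, x^2}.
dxdt = np.gradient(x, dt)                   # estimated time derivative
A = np.column_stack([x, x ** 2])            # candidate polynomial terms
(a_hat, b_hat), *_ = np.linalg.lstsq(A, dxdt, rcond=None)
```

The recovered coefficients match the true ones up to discretization error; real applications must additionally select which polynomial terms to keep and cope with observation noise.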

In this paper we examined plan continuation error (PCE), a well-known pilot error that consists in continuing with the flight plan despite adverse meteorological conditions. Our hypothesis is that a large range of strong negative emotional consequences, including those induced by economic pressure, are associated with the decision to revise the flight plan, and that this favors PCE. We investigated the economic hypothesis with a simplified landing task (a reproduction of a real aircraft instrument) in which uncertainty and reward were manipulated. Heart rate (HR), heart rate variability (HRV) and eye-tracking measurements were performed to obtain objective clues to both the cognitive and emotional state of the volunteers. Results showed that volunteers made more risky decisions under the influence of the financial incentive, in particular when uncertainty was high. Psychophysiological examination showed that HR increased and total HRV decreased in response to the cognitive load generated by the task. In addition, HR also increased in response to the financially motivated condition. Finally, fixation times increased when uncertainty was high, confirming the difficulty of obtaining and interpreting information from the instrument in this condition. These results support the assumption that the risky decision-making observed in pilots can be, at least partially, explained by a shift from cold to hot (emotional) decision-making in response to economic constraints and uncertainty.

Standardized neurofeedback (NF) protocols have been extensively evaluated in attention-deficit/hyperactivity disorder (ADHD). However, such protocols do not account for the large EEG heterogeneity in ADHD. Thus, individualized approaches have been suggested to improve the clinical outcome. In this direction, an open-label pilot study was designed to evaluate a NF protocol of relative upper alpha power enhancement in fronto-central sites. The upper alpha band was individually determined using the alpha peak frequency as an anchor point. Twenty ADHD children underwent 18 training sessions. Clinical and neurophysiological variables were measured pre- and post-training. EEG was recorded pre- and post-training, and pre- and post-training trials within each session, in both eyes-closed resting state and eyes-open task-related activity. A power EEG analysis assessed long-term and within-session effects, in the trained parameter and in all the sensors in the 1-30 Hz spectral range. Learning curves over sessions were assessed as well. Parents rated a clinical improvement in children regarding inattention and hyperactivity/impulsivity. Neurophysiological tests showed an improvement in working memory, concentration and impulsivity (a decreased number of commission errors in a continuous performance test). Relative and absolute upper alpha power showed long-term enhancement in task-related activity, and a positive learning curve over sessions. The analysis of within-session effects showed a power decrease ("rebound" effect) in task-related activity, with no significant effects during training trials. We conclude that the enhancement of the individual upper alpha power is effective in improving several measures of clinical outcome and cognitive performance in ADHD. This is the first NF study evaluating such a protocol in ADHD. A controlled evaluation seems warranted given the positive results obtained in the current study.

Extending and modifying one's domain of life through artifact production is one of the main characteristics of humankind. From the first hominids, who used a wooden stick or a stone to extend their upper limbs and augment the strength of their gestures, to today's systems engineers, who use technologies to augment human cognition, perception and action, extending the capabilities of the human body remains a major issue. For more than fifty years, cybernetics, computer science and cognitive science have imposed a single reductionist model of human-machine systems: cognitive systems. Inspired by philosophy, behaviorist psychology and the information-processing metaphor, the cognitive-system paradigm requires a functional view and a functional analysis in the design process of human systems. Under that design approach, humans have been reduced to their metaphysical and functional properties in a new dualism, and the requirements of the human body have been left to physical ergonomics or "physiology". With multidisciplinary convergence, the issues of "human-machine" systems and "human artifacts" evolve. The loss of biological and social boundaries between human organisms and interactive, informational physical artifacts calls into question current engineering methods and the ergonomic design of cognitive systems. New developments of human-machine systems for intensive care, human space activities or bio-engineering systems require grounding human systems design in a renewed epistemological framework for future models of human systems and evidence-based "bio-engineering". In that context, reclaiming human factors, the augmented human and the nature of human-machine systems is a necessity.

The considerable growth of the world population, and consequently of its needs, puts ever greater pressure on forested areas. The tool best suited to monitoring forests at the global scale is satellite remote sensing. This thesis work is set in that context and aims to improve the estimation of forest biophysical parameters from remote sensing data. The originality of this work was to study the estimation of biophysical parameters by carrying out several sensitivity studies, with an experimental approach and on simulated data. First, the study dealt with time series of radar scatterometer measurements obtained at two sites: one a cedar in a temperate zone, the other a plot of tropical forest. This sensitivity study was then pursued by imaging, at high resolution, several plots with different configurations inside a pine forest. Finally, simulated optical and radar data were combined in order to assess the contribution of the fusion of such data to the inversion of biophysical parameters. The experimental results showed different behaviors of the radar response over time depending on the season, notably the appearance of daily cycles during rain-free periods, in both tropical and temperate zones. Moreover, it was found that, whereas the biophysical parameters linked to wood and soil moisture caused variations of the radar signal on the order of 1 or 2 dB, the parameters linked to tree geometry and ground slope gave variations of up to 5 to 7 dB. Finally, the optical-radar simulator demonstrated the use that could be made of such data for the inversion of biophysical parameters.

Background: Windscapes affect energy costs for flying animals, but animals can adjust their behavior to accommodate wind-induced energy costs. Theory predicts that flying animals should decrease airspeed to compensate for increased tailwind speed and increase airspeed to compensate for increased crosswind speed. In addition, animals are expected to vary their foraging effort in time and space to maximize energy efficiency across variable windscapes. Results: We examined the influence of wind on seabird (thick-billed murre Uria lomvia and black-legged kittiwake Rissa tridactyla) foraging behavior. Airspeed and mechanical flight costs (dynamic body acceleration and wing beat frequency) increased with headwind speed during commuting flights. As predicted, birds adjusted their airspeed to compensate for crosswinds and to reduce the effect of a headwind, but they could not completely compensate for the latter. As we were able to account for the effect of sampling frequency and wind speed, we accurately estimated commuting flight speed with no wind as 16.6 m s−1 (murres) and 10.6 m s−1 (kittiwakes). High winds decreased delivery rates of schooling fish (murres), energy (murres) and food (kittiwakes) but did not impact daily energy expenditure or chick growth rates. During high winds, murres switched from feeding their offspring with schooling fish, which required substantial above-water searching, to amphipods, which required less above-water searching. Conclusions: Adults buffered the adverse effect of high winds on chick growth rates by switching to other food sources during windy days or increasing food delivery rates when weather improved.
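The compensation predictions above follow from elementary wind-triangle geometry: ground velocity is the vector sum of air velocity and wind velocity. A minimal sketch (function and parameter names are illustrative; it assumes the bird holds a fixed ground speed along its track):

```python
import math

def required_airspeed(ground_speed, wind_speed, wind_angle):
    """Airspeed needed to hold ground_speed along the track, with
    wind blowing at wind_angle radians from the track direction
    (0 = pure tailwind, pi = pure headwind).
    The bird must cancel the cross-track wind component and make up
    the along-track deficit (or surplus, for a tailwind)."""
    tail = wind_speed * math.cos(wind_angle)   # along-track component
    cross = wind_speed * math.sin(wind_angle)  # cross-track component
    along = ground_speed - tail                # airspeed along track
    return math.hypot(along, cross)
```

A tailwind lowers the required airspeed, while both headwinds and crosswinds raise it, matching the theoretical prediction quoted in the abstract.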

Investigations into China's law and legal system invariably presume that China's many regulatory problems are problems of a regulatory laggard--that they are problems that stem from China's failure to as yet construct a mature legal system, such as those found in the advanced industrial countries of the so-called "West" (particularly that of the United States). In this paper, I argue that this is not necessarily the case. China may actually, in many aspects, be operating at the very forefront of the regulatory horizon, compelled by its location in a distinctly post-Fordist Asian economic world to confront regulatory problems that are just beginning to seep into the still largely "Fordist" West. Many of China's regulatory problems, in other words, may often be those of a regulatory pioneer, not those of a regulatory laggard, in the sense that many of its regulatory problems are likely to be problems of the post-Fordist world into which we are moving, rather than problems of the Fordist world from which our present regulatory understandings are derived.

This paper contributes to the debate on the consolidation of French public finances by investigating the long-term sustainability of France's fiscal position. We trace the historical trends of government tax receipts and expenditures. We find that while the level of public expenditure in France is larger than in the rest of the Euro Area (mostly because of public wages and social benefits), its trend is comparable to that of its neighbours. Net lending is also under control, thanks to the high levels of taxation, so that we see no real risk of future unsustainability. However, the French tax system is unfair, is not sufficiently progressive, and is too complex. The paper then proceeds to assess the future of France's public finances on the basis of the current debate on the Euro Area fiscal rules. We report two analyses - theoretical and empirical - that project the inflation rate and output gap paths for the next twenty years. We finally assess fiscal rules on this ground. The 'fiscal compact' fares rather poorly compared to the alternative rules that we assess.

So, do we need growth and might we learn to live without it? Nearly all of us who write regularly for SPERI Comment have at least nodded in the direction of the need to think beyond growth in some way. But we have typically left it at that, with the question of what 'beyond growth' actually means left unresolved. In fact, most of us (and I include myself here) have done rather worse than that - in that having identified the need to think beyond growth we have then reverted to the more familiar (and simpler) task of considering how growth (albeit a more sustainable growth) might be restored to our ailing economies. We all know this won't do (...).

Electropermeabilization is a technique that allows, among other things, cytotoxic molecules to enter tumors. It consists of the transient permeabilization of the plasma membrane following the application of pulsed electric fields. Certain electrical conditions allow gene transfer, extending the technique's field of application to gene therapy. This thesis studied the effects of electric fields on cells and tissues in the context of gene electrotransfer. A mechanistic understanding of this transfer is indeed essential to optimize the technique for future clinical applications. In this context, we studied the three barriers encountered by the gene during its transfer: the complexity of the multicellular environment at the tissue level, and the plasma membrane and the nuclear envelope at the cell level. i) The efficiency of gene electrotransfer was studied on the in vitro/ex vivo tumor model known as the spheroid. This model was first validated for the study of electrotransfection; the reasons for the lack of efficiency in a tissue structure were then identified, and optimization of the technique was initiated. ii) A second part was devoted to the nano-mechanical study of cells at the plasma-membrane scale by atomic force microscopy, which was used to image, and to measure by force spectroscopy, the effect of electropermeabilization on the plasma membrane. We imaged the membrane perturbation and measured a decrease in membrane elasticity following the application of the electric fields. This phenomenon was linked to secondary effects of electropermeabilization affecting cortical actin. iii) A final part examined the effects of nanopulses. These very short (ns) and intense (several kV/cm) pulses represent the new generation of pulses; their effects are still poorly described, but they could allow a specific destabilization of organelle envelopes. The impact of these nanosecond pulses on the membrane was analyzed by patch-clamp to determine the involvement of the actin cytoskeleton in the shape of the created nanopores. Their impact on the nuclear envelope was then studied, in order to determine possible deleterious effects on cell function, as well as the potential increase in transfection resulting from a destabilization of the second barrier encountered by the gene during its transfer. We show that actin plays no role in the formation of the nanopores, and that nanosecond pulses do not increase transfection efficiency. In conclusion, this work has provided new elements in the understanding of the mechanism of electroporation and of the barriers to gene transfer. Protocols, models, and tools have been established and are now validated and available for further investigation of the effects of electric fields on living systems.

Upper bound limit analysis allows one to evaluate directly the ultimate load of structures without performing a cumbersome incremental analysis. In order to numerically apply this method to thin plates in bending, several authors have proposed to use various finite elements discretizations. We provide in this paper a mathematical analysis which ensures the convergence of the finite element method, even with finite elements with discontinuous derivatives such as the quadratic 6 node Lagrange triangles and the cubic Hermite triangles. More precisely, we prove the $\Gamma$-convergence of the discretized problems towards the continuous limit analysis problem. Numerical results illustrate the relevance of this analysis for the yield design of both homogeneous and non-homogeneous materials.

This paper deals with the integrated thermal modelling of photovoltaic panels for the solar protection of buildings under strong solar radiation, as encountered in tropical and humid conditions. The thermal model is integrated in a building simulation code and is able to predict the thermal impact of PV panels installed on buildings in several configurations, as well as their electricity production. Basically, the PV panel is considered as a complex wall within which coupled heat transfers occur. Conduction, convection and radiation heat transfer equations are solved to simulate the global thermal behaviour of the building envelope including the PV panels. The model is first detailed, with a focus on the radiation modelling within the semi-transparent layers of the panels, and then preliminary results are presented in terms of verification. Conclusions are finally drawn regarding the impact of the panels in terms of thermal insulation under summer tropical conditions.

A superstring of a set of words is a string that contains each input word as a substring. Given such a set, the Shortest Superstring Problem (SSP) asks for a superstring of minimum length. SSP is an important theoretical problem related to the Asymmetric Travelling Salesman Problem, and also has practical applications in data compression and in bioinformatics. Indeed, it models the question of assembling a genome from a set of sequencing reads. Unfortunately, SSP is known to be NP-hard even on a binary alphabet, and also hard to approximate with respect to the superstring length or to the compression achieved by the superstring. Even the variant in which all words share the same length r, called r-SSP, is NP-hard whenever r > 2. Numerous involved approximation algorithms achieve approximation ratios above 2 for the superstring, but remain difficult to implement in practice. In contrast, the greedy conjecture, posed in 1988, asks whether a simple greedy agglomeration algorithm achieves a ratio of 2 for SSP. Here, we present a novel approach to bound the superstring approximation ratio with the compression ratio, leading to a first proof of the greedy conjecture for 3-SSP.
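The greedy agglomeration algorithm that the conjecture refers to can be sketched as follows: repeatedly merge the pair of words with maximum overlap until one string remains (a naive quadratic-per-merge implementation for illustration, not the paper's proof machinery):

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_superstring(words):
    """Greedy agglomeration: merge the most-overlapping pair first."""
    words = list(dict.fromkeys(words))  # deduplicate, keep order
    # discard words already contained in another word
    words = [w for w in words
             if not any(w != v and w in v for v in words)]
    while len(words) > 1:
        best_k, best_i, best_j = -1, 0, 1
        for i, a in enumerate(words):
            for j, b in enumerate(words):
                if i != j:
                    k = overlap(a, b)
                    if k > best_k:
                        best_k, best_i, best_j = k, i, j
        merged = words[best_i] + words[best_j][best_k:]
        words = [w for idx, w in enumerate(words)
                 if idx not in (best_i, best_j)]
        words.append(merged)
    return words[0] if words else ""
```

The conjecture states that the superstring produced this way is never more than twice the optimal length; the paper proves the statement for words of length 3.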

Comprehensive terminology is essential for a community to describe, exchange, and retrieve data. In many domains, the explosion of produced text data has reached a level at which automatic terminology extraction and enrichment is mandatory. Automatic Term Extraction (or Recognition) methods use natural language processing to this end. Methods combining linguistic and statistical aspects, as often proposed in the literature, solve some of the problems related to term extraction, such as low frequency, the complexity of multi-word term extraction, and the human effort needed to validate candidate terms. In this context, we present two new measures for extracting and ranking multi-word terms from domain-specific corpora, covering all of the problems mentioned above. In addition, we demonstrate how using the Web to evaluate the significance of a multi-word term candidate helps us to outperform the precision results obtained on the biomedical GENIA corpus with previously reported measures such as C-value.
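For reference, the baseline C-value measure mentioned above (Frantzi et al.) weights a candidate's frequency by its length and discounts occurrences nested inside longer candidates. A minimal sketch (the function signature is illustrative; this is the classical baseline, not the paper's new measures):

```python
import math

def c_value(term, freq, longer_term_freqs):
    """C-value of a candidate multi-word term.
    term: the candidate, as a tuple of words
    freq: its frequency in the corpus
    longer_term_freqs: frequencies of the longer candidates
                       that contain it (empty if not nested)"""
    weight = math.log2(len(term))
    if not longer_term_freqs:
        return weight * freq
    # subtract the average frequency of the containing candidates
    nested = sum(longer_term_freqs) / len(longer_term_freqs)
    return weight * (freq - nested)
```

A candidate that mostly appears inside longer terms is thus pushed down the ranking, which is the behaviour the new measures are compared against on GENIA.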

Biomolecules essentially fulfill their function through continual recognition of and binding to other molecules. Biomolecular recognition is therefore a phenomenon of prominent importance. When the progress of monoclonal antibody technology and genetic engineering allowed biologists to characterize and isolate an impressive variety of receptor molecules, it was first felt that affinity constants and kinetic rates provided a satisfactory account of receptor-ligand interactions. However, a number of advances that occurred during the last two decades showed that i) the conventional framework was not sufficient to predict the behaviour of biomolecules in many physiologically relevant situations, ii) a number of techniques allowed investigators to dissect biomolecule interactions at the single bond level and obtain new information on the kinetic and mechanical properties of these interactions, iii) new theoretical techniques and the development of computer simulation as well as the enormous increase of available structural data provided new avenues to relate structural and functional properties. The aim of this introductory chapter is to present a brief outline of these advances and pending issues.

In machine learning, the domain adaptation problem arises when the test (target) and the training (source) data are generated from different distributions. A key applied issue is thus the design of algorithms able to generalize on a new distribution, for which we have no label information. We focus on learning classification models defined as a weighted majority vote over a set of real-valued functions. In this context, Germain et al. (2013) have shown that a measure of disagreement between these functions is crucial to control. The core of this measure is a theoretical bound--the C-bound (Lacasse et al., 2007)--which involves the disagreement and leads to a well-performing majority vote learning algorithm in the usual non-adaptive supervised setting: MinCq. In this work, we propose a framework to extend MinCq to a domain adaptation scenario. This procedure takes advantage of the recent perturbed variation divergence between distributions proposed by Harel and Mannor (2012). Justified by a theoretical bound on the target risk of the vote, we provide MinCq with a target sample labeled via a perturbed-variation-based self-labeling focused on the regions where the source and target marginals appear similar. We also study the influence of our self-labeling, from which we deduce an original process for tuning the hyperparameters. Finally, our framework, called PV-MinCq, shows very promising results on a rotation and translation synthetic problem.
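For reference, a commonly quoted form of the C-bound (Lacasse et al., 2007), stated here with standard notation not taken from this abstract: for a distribution $Q$ over voters $h$ and a data distribution $D$, writing $M_Q(x,y) = \mathbb{E}_{h\sim Q}[\,y\,h(x)\,]$ for the margin of the majority vote $B_Q$, whenever $\mathbb{E}[M_Q] > 0$,

```latex
R_D(B_Q) \;\le\; 1 - \frac{\bigl(\mathbb{E}_{(x,y)\sim D}\,[M_Q(x,y)]\bigr)^2}{\mathbb{E}_{(x,y)\sim D}\,[M_Q(x,y)^2]} .
```

The second moment in the denominator is where the disagreement between voters enters, which is why controlling disagreement is central to MinCq and to its adaptation here.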

This paper is dedicated to the study of an estimator of the generalized Hoeffding decomposition. We build such an estimator using an empirical Gram-Schmidt approach and derive a consistency rate in a large dimensional setting. We then apply a greedy algorithm with these previous estimators to a sensitivity analysis. We also establish the consistency of this L2-boosting under sparsity assumptions of the signal to be analyzed. The paper concludes with numerical experiments, that demonstrate the low computational cost of our method, as well as its efficiency on the standard benchmark of sensitivity analysis.

This paper addresses the problem of estimating the extreme value index in the presence of random censoring, for distributions in the Weibull domain of attraction. The methodologies introduced in [Worms 2014] in the heavy-tailed case are adapted here to the negative extreme value index framework, leading to weighted versions of the popular moments of relative excesses. This leads to the definition of two families of estimators (with an adaptation of the so-called Moment estimator as a particular case), for which consistency is proved under a first-order condition. Illustrations of their performance, from an extensive simulation study, are provided.
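In the uncensored case, the classical Moment estimator that the paper adapts (Dekkers, Einmahl and de Haan) combines the first two empirical log-moments of the k largest observations. A minimal sketch of that classical version (illustrative only, not the paper's weighted, censoring-adapted estimators):

```python
import math

def moment_estimator(sample, k):
    """Dekkers-Einmahl-de Haan moment estimator of the extreme
    value index, from the k largest order statistics of a
    positive, uncensored sample."""
    x = sorted(sample)
    n = len(x)
    threshold = math.log(x[n - k - 1])  # (k+1)-th largest value
    logs = [math.log(x[n - i - 1]) - threshold for i in range(k)]
    m1 = sum(v for v in logs) / k          # first log-moment (Hill)
    m2 = sum(v * v for v in logs) / k      # second log-moment
    return m1 + 1 - 0.5 / (1 - m1 * m1 / m2)
```

Unlike the Hill estimator, this expression remains consistent for a negative extreme value index, which is why it is the natural starting point for the Weibull domain of attraction considered here.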
