Latest Deposits
Cylindrical pieces of wood that fit in 10 mm o.d. NMR sample tubes were impregnated with an aqueous solution of sodium nitrite (NaNO2), then dried and examined by 14N quadrupole resonance. To our surprise, we observed a signal at the expected frequency (4.64 MHz, the highest-frequency NaNO2 line). This implies not only that the material was properly impregnated but also that the sodium nitrite had properly recrystallized, a point that was far from obvious beforehand. The method works for spruce, beech, ash, and maritime pine, but not for oak (which is known to be difficult to impregnate). Instrumental parameters were optimized to reduce the duration of the experiment: while the first measurements required around four hours, we now obtain acceptable results in about four minutes or less. In addition, we designed a simple NLLS (Non-Linear Least Squares) algorithm by which the spectral parameters of the NQR signal can be retrieved even if the peak is hardly visible in the frequency domain. Extensive measurements on NaNO2 recrystallized in spruce show, among other things, that all of the NaNO2 that penetrated the material has actually recrystallized, seemingly permanently. Various assays were carried out as a function of impregnation and drying conditions. The evolution of the width of the NaNO2 line reflects defects in the crystal lattice of NaNO2, probably due to alteration of the wood structure.
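The abstract does not detail the NLLS algorithm; the sketch below is only a hedged illustration of the general idea, fitting an exponentially damped cosine (a common model for a demodulated time-domain NQR signal) by non-linear least squares with scipy. The model form, noise level, and starting guesses are assumptions, not the authors' procedure.

```python
# Hypothetical NLLS sketch: fit a damped cosine to a noisy time-domain signal.
# Model, parameters, and initial guess are illustrative assumptions only.
import numpy as np
from scipy.optimize import least_squares

def fid(p, t):
    # p = (amplitude, offset frequency in Hz, decay constant T2* in s, phase)
    a, f, t2, phi = p
    return a * np.exp(-t / t2) * np.cos(2.0 * np.pi * f * t + phi)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5e-3, 4096)                  # 5 ms acquisition window
truth = (1.0, 2.0e3, 1.2e-3, 0.4)                 # signal demodulated near 4.64 MHz
y = fid(truth, t) + rng.normal(0.0, 0.8, t.size)  # poor per-point SNR

# Starting guess, e.g. from the largest FFT bin, then refined by NLLS.
p0 = (0.5, 2.05e3, 1.0e-3, 0.0)
fit = least_squares(lambda p: fid(p, t) - y, p0)
print("estimated (a, f, T2*, phi):", fit.x)
```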
The Mullins effect remains a major challenge for the mechanical modeling of the complex behavior of industrial rubber materials. Forty years after Mullins [1] wrote his review on the phenomenon, there is still no general agreement on either its physical source or its mechanical modeling. We therefore review the literature dedicated to this topic over the past six decades. We first present the experimental evidence characterizing the Mullins softening, a phenomenon observed in filled and crystallizing rubbers. We then examine the phenomenological models designed to fit the mechanical behavior of rubbers undergoing Mullins softening. To overcome the limits of purely descriptive phenomenological modeling, several authors have looked for a physical understanding of the phenomenon. Various theories have been proposed, but none of them is supported unanimously. Nonetheless, these theories have favored the emergence of physically based mechanical behavior laws. We tested some of these laws; they show little predictive ability, since the values of their parameters do not compare well with the physical quantities to which they are linked.
For equiatomic MgNi, which can be hydrogenated up to the composition MgNiH1.6 at an absorption/desorption temperature of 200 °C, the effects of hydrogen absorption are approached with the model structures MgNiH, MgNiH2 and MgNiH3. From full geometry optimization and cohesive energies calculated within DFT, the MgNiH2 composition, close to the experimental limit, is identified as the most stable. Charge density analysis shows an increasingly covalent character of hydrogen: MgNiH (H−0.67) → MgNiH2 (H−0.63) → MgNiH3 (H−0.55). While Mg-Ni bonding prevails in MgNi and the hydrogenated model phases, extra itinerant low-energy Ni states appear when hydrogen is introduced, signaling Ni-H bonding, which prevails over Mg-H bonding as evidenced by total energy calculations and chemical bonding analyses.
In this work, we propose to price Parisian options using Laplace transforms. Not only do we compute the Laplace transforms of all the different Parisian options, but we also explain how to invert them numerically. We prove the accuracy of the numerical inversion.
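The paper's inversion scheme is not reproduced here; as a hedged illustration, the sketch below implements one standard numerical Laplace inversion, the Abate-Whitt Euler algorithm, checked on a transform with a known inverse. The constants (A = 18.4, n = 15, m = 11) are conventional defaults, not values from the paper.

```python
from math import comb, exp, pi

def euler_inversion(F, t, a=18.4, n=15, m=11):
    """Abate-Whitt Euler algorithm: numerically invert the Laplace transform
    F at time t > 0 via an alternating series plus Euler (binomial)
    averaging of its partial sums."""
    def partial(k_max):
        s = 0.5 * F(a / (2 * t)).real
        for k in range(1, k_max + 1):
            s += (-1) ** k * F((a + 2j * pi * k) / (2 * t)).real
        return s * exp(a / 2) / t
    return sum(comb(m, j) * partial(n + j) for j in range(m + 1)) / 2 ** m

# Sanity check: F(s) = 1/(s+1) inverts to f(t) = exp(-t).
F = lambda s: 1.0 / (s + 1.0)
print(euler_inversion(F, 1.0), exp(-1.0))
```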
Tree methods are among the most popular numerical methods for pricing financial derivatives. Mathematically speaking, they are easy to understand and do not require advanced implementation skills to obtain pricing algorithms. Tree methods basically consist in approximating the diffusion process modelling the underlying asset price by a discrete random walk. In this contribution, we provide a survey of tree methods for equity options, focusing on the multiplicative binomial Cox-Ross-Rubinstein model.
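For concreteness, here is a minimal Cox-Ross-Rubinstein pricer for a European call; the parameter names and backward-induction layout follow textbook convention rather than any code from the survey.

```python
import math

def crr_european_call(s0, k, r, sigma, t, n):
    """Cox-Ross-Rubinstein binomial tree for a European call option."""
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1.0 / u                           # down factor
    q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)
    # Terminal payoffs on the recombining lattice (j = number of up moves).
    values = [max(s0 * u**j * d**(n - j) - k, 0.0) for j in range(n + 1)]
    # Backward induction to time 0.
    for step in range(n, 0, -1):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(step)]
    return values[0]

# Converges to the Black-Scholes price (about 10.45) as n grows.
print(crr_european_call(s0=100, k=100, r=0.05, sigma=0.2, t=1.0, n=500))
```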
The possibility that the collective dynamics of a set of stocks could lead to a specific basket violating the efficient market hypothesis is investigated. Precisely, we show that it is systematically possible to form a basket with a non-trivial autocorrelation structure when the examined time scales are of the order of tens of seconds. Moreover, we show that this situation is persistent enough to allow some kind of forecasting.
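As a toy illustration of the diagnostic underlying this claim, the sketch below estimates the lagged autocorrelation of a fixed-weight basket built from synthetic returns; the weights and data are placeholders, not the paper's construction of the basket.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 10-second returns for 5 stocks (placeholder for real tick data).
returns = rng.normal(0.0, 1e-4, size=(5000, 5))
weights = np.full(5, 0.2)          # fixed equal-weight basket (assumption)
basket = returns @ weights

def autocorr(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# A non-trivial autocorrelation at short lags would hint at predictability.
print([round(autocorr(basket, lag), 4) for lag in (1, 2, 5, 10)])
```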
We consider the problem of reliably broadcasting information in a multihop asynchronous network, despite the presence of Byzantine failures: some nodes are malicious and behave arbitrarily. We focus on non-cryptographic solutions. Most existing approaches give conditions for perfect reliable broadcast (all correct nodes deliver the authentic information), but require a highly connected network. A probabilistic approach was recently proposed for loosely connected networks: the Byzantine failures are randomly distributed, and the correct nodes deliver the authentic information with high probability. A first solution requires the nodes to initially know their position in the network, which may be difficult or impossible in self-organizing or dynamic networks. A second solution relaxes this hypothesis but has much weaker Byzantine tolerance guarantees. In this paper, we propose a parameterizable broadcast protocol that does not require nodes to have any knowledge of the network. We give a deterministic technique to compute a set of nodes that always deliver authentic information, for a given set of Byzantine failures. We then use this technique to experimentally evaluate our protocol, and show that it significantly outperforms previous solutions under the same hypotheses.
This paper addresses the optimization of protection strategies in critical infrastructures within a complex network systems perspective. The focus is on cascading failures triggered by the intentional removal of a single network component. Three different protection strategies are proposed that minimize the consequences of cascading failures on the entire system, on predetermined areas, or on both scales of protective intervention within a multi-objective optimization framework. We optimize the three protection strategies by devising a modified binary differential evolution scheme that overcomes the combinatorial complexity of this optimization problem. We exemplify our methodology with reference to the topology of an electricity infrastructure, i.e. the 380 kV Italian power transmission network. We only focus on the structure of this network as a test case for the suggested protection strategies, with no further reference to its physical and electrical properties.
A multi-objective power unit commitment problem is framed to consider simultaneously the objectives of minimizing the operation cost and minimizing the emissions from the generation units. To find the optimal schedule of the generation units, a memetic evolutionary algorithm is proposed, which combines the non-dominated sorting genetic algorithm-II (NSGA-II) and a local search algorithm. The power dispatch sub-problem is solved by the weighted-sum lambda-iteration approach. The proposed method has been tested on systems composed of 10 and 100 generation units over a 24-hour demand horizon. The Pareto-optimal front obtained contains solutions with different trade-offs between the two objectives of cost and emission, which are superior to those contained in the Pareto front obtained by the pure NSGA-II. The minimum-cost solutions are shown to compare well with recently published results obtained by single-objective cost optimization algorithms.
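For the dispatch sub-problem, a hedged sketch of the classical lambda-iteration is shown below in its cost-only form for quadratic fuel-cost curves; the cost coefficients, bracket, and tolerance are illustrative, and the weighted-sum combination with emissions is omitted for brevity.

```python
def lambda_dispatch(units, demand, tol=1e-6):
    """Classical lambda-iteration economic dispatch for quadratic costs
    C_i(P) = a + b*P + c*P**2; units = [(b, c, pmin, pmax), ...]."""
    def output(lam):
        # Marginal cost b + 2*c*P = lam  =>  P = (lam - b) / (2*c), clipped.
        return [min(max((lam - b) / (2.0 * c), pmin), pmax)
                for b, c, pmin, pmax in units]
    lo, hi = 0.0, 1000.0                       # assumed bracket on lambda
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if sum(output(lam)) < demand:
            lo = lam                           # too little power: raise lambda
        else:
            hi = lam
    return output(0.5 * (lo + hi))

# Three illustrative units (b, c, pmin, pmax) serving a 500 MW demand.
units = [(7.0, 0.008, 50, 300), (6.5, 0.010, 50, 250), (8.0, 0.006, 50, 300)]
print(lambda_dispatch(units, demand=500.0))
```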
We present an integral representation formula for a Dirichlet series whose coefficients are the values of Liouville's arithmetic function.
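For context (a classical fact about this series, not the paper's representation formula): the Dirichlet series generated by the Liouville function is expressible in terms of the Riemann zeta function,

```latex
% Liouville function: \lambda(n) = (-1)^{\Omega(n)}, with \Omega(n) the
% number of prime factors of n counted with multiplicity.
\sum_{n=1}^{\infty} \frac{\lambda(n)}{n^{s}}
  = \frac{\zeta(2s)}{\zeta(s)},
\qquad \operatorname{Re}(s) > 1 .
```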
When it is polarised, a cell develops an asymmetric distribution of specific molecular markers, of its cytoskeleton, and of its cell membrane shape. Polarisation can occur spontaneously or be triggered by external signals, such as gradients of signalling molecules. In this work, we take published models of cell polarisation and develop a numerical analysis for them. These models are based on nonlinear convection-diffusion equations, where the nonlinearity in the transport term expresses the positive feedback loop between the level of protein concentration localised in a small area of the cell membrane and the number of new proteins that will be convected to the same area. We perform numerical simulations and illustrate that these models are rich enough to describe the emergence of a polarisome.
A cell is polarised when it has developed a main axis of organisation through the reorganisation of its cytoskeleton and its intracellular organelles. Polarisation can occur spontaneously or be triggered by external signals, such as gradients of signalling molecules. In this work, we study mathematical models for cell polarisation. These models are based on nonlinear convection-diffusion equations. The nonlinearity in the transport term expresses the positive feedback loop between the level of protein concentration localised in a small area of the cell membrane and the number of new proteins that will be convected to the same area. We perform numerical simulations and illustrate that these models are rich enough to describe the emergence of a polarisome.
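A heavily hedged one-dimensional sketch of the feedback mechanism described in these two abstracts: molecules diffuse on [0, L] and are advected toward the membrane (x = 0) at a speed set by the membrane concentration u(t, 0). The explicit scheme, boundary handling, and parameters are crude illustrative choices, not the published models.

```python
import numpy as np

nx, L, chi = 200, 1.0, 8.0            # chi: assumed feedback strength
dx = L / nx
dt = 0.2 * dx * dx                    # stable explicit diffusion step
x = np.linspace(0.0, L, nx)
u = 1.0 + 0.05 * np.exp(-x / 0.1)     # slight initial bias at the membrane

for _ in range(20000):
    speed = chi * u[0]                # positive feedback: drift ~ u(t, 0)
    diff = np.zeros(nx)
    diff[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    adv = np.zeros(nx)
    adv[:-1] = speed * (u[1:] - u[:-1]) / dx   # upwind transport toward x = 0
    u = u + dt * (diff + adv)
    u[0], u[-1] = u[1], u[-2]         # crude no-flux walls
print("peak at membrane:", u[0], "far end:", u[-1])
```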
We study an XY-rotor model on regular one-dimensional lattices by varying the number of neighbours. A parameter $1\le\gamma\le2$ is defined: $\gamma=2$ corresponds to mean-field coupling and $\gamma=1$ to nearest-neighbour coupling. We find that for $\gamma<1.5$ the system does not exhibit a phase transition, while for $\gamma>1.5$ the mean-field second-order transition is recovered. For the critical value $\gamma=\gamma_c=1.5$, the system can be in a non-trivial fluctuating phase in which the magnetisation shows large fluctuations in a given temperature range, implying an infinite susceptibility. For all values of $\gamma$ the magnetisation is computed analytically in the low-temperature range, and the magnetised versus non-magnetised state, which depends on the value of $\gamma$, is recovered, confirming the critical value $\gamma_c=1.5$.
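A hedged numerical sketch of such a model: Metropolis sampling of XY rotors on a ring where each site couples to k nearest neighbours per side, with k growing as $N^{\gamma-1}$. The exact normalisation used here, including the 1/(2k) energy scaling and all parameter values, is an assumption, not the paper's definition.

```python
import numpy as np

def magnetisation(theta):
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())

def metropolis_xy(n=256, gamma=1.8, temp=0.2, sweeps=500, seed=0):
    """XY rotors on a ring, each coupled to k ~ n**(gamma-1)/2 neighbours
    per side (assumed interpolation: gamma=1 nearest neighbour,
    gamma=2 mean field)."""
    rng = np.random.default_rng(seed)
    k = max(1, int(round(n ** (gamma - 1.0) / 2.0)))
    offsets = np.concatenate([np.arange(1, k + 1), -np.arange(1, k + 1)])
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    for _ in range(sweeps * n):
        i = int(rng.integers(n))
        new = theta[i] + rng.normal(0.0, 0.5)
        nbrs = theta[(i + offsets) % n]
        # Site energy: -(1/2k) * sum_j cos(theta_i - theta_j)
        de = -(np.cos(new - nbrs).sum()
               - np.cos(theta[i] - nbrs).sum()) / (2.0 * k)
        if de <= 0.0 or rng.random() < np.exp(-de / temp):
            theta[i] = new
    return magnetisation(theta)

print(metropolis_xy())  # sizeable magnetisation expected for gamma > 1.5
```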
Wood is enjoying increasing popularity in the building sector. In order to fully exploit the potential of this material, particularly in two- and three-dimensional structures, improved knowledge of its mechanical behavior and more complex constitutive models are required. We herein present a holistic approach to mechanical material modeling of wood, encompassing a multitude of length scales as well as computational and experimental efforts. This allows us to resolve the microstructural origin of the macroscopic material behavior and, finally, to apply the gained knowledge to structural applications in a timber engineering framework. Focusing on elastoplasticity and viscoelasticity, exemplary results of the performed investigations are presented and their interrelations discussed. Regarding computational approaches, the presented developments include multiscale models for the prediction of elastic limit states and creep compliances of wood, macroscopic phenomenological models for wood plasticity and for time- and moisture-dependent behavior, and their applications to investigations of dowel joints and glued-laminated timber beams. Accompanying experiments provided additional input data for the computational analyses, thereby completing the set of material properties predicted by the multiscale models. Moreover, they served as the reference basis for model validation at both the material and the structural scale.
Sampling protocols usually concern the way some kilograms of material are reduced to some grams with the same properties, but another protocol has to be considered: the choice of the samples used for estimating the resources of a deposit or some of its attributes. An important attribute is the metallurgical recovery, calculated from data sampling the deposit, on which laboratory tests are made to reproduce the metallurgical recovery process at a reduced scale. Such tests are few because they are expensive; hence the idea of combining them, using geostatistical techniques, with exploration data for which only the in situ grade is known. While trying to put this idea into practice in a porphyry copper deposit located in the Chilean Central Andes, we encountered a surprising situation: laboratory tests and exploration measurements are supposed to use the same material, but the total grades they measure do not have the same spatial variability. The paper presents the study and the impact of four causes. Spatial restriction: laboratory samples do not cover the same domain as exploration data. Regularization: laboratory and exploration samples do not have the same size. Sampling density: in the rich unit of the studied area, there are about two hundred laboratory samples and four thousand exploration ones. Grade selection: laboratory practice avoids high and low grades. The study shows that the major cause of the observed differences is grade selection, but also that the number of laboratory tests is certainly too small with regard to the spatial variability of the grades. The consequence is that the sampling protocol for the metallurgical recovery tests should be reconsidered if one wants to use them jointly with exploration data.
This work concerns mineral deposits made of geological bodies such as breccias or lenses that contain several categories of grades with different characteristics in terms of distribution and variogram. When production blocks contain few such bodies, estimating block grades by ordinary kriging may produce unrealistic spatial continuity. We propose a method based on the indicators of objects (units or facies) together with their products with the grade. This is illustrated by an application to a porphyry copper deposit.
In a context of climate change, numerous studies focus on the observation of physical parameters such as soil temperature or vegetation. Soil moisture is one of the tracers monitored to follow the evolution of the hydrological cycle on Earth. Among Earth-observation satellites, the first mission specifically designed to observe soil moisture on a global scale was the SMOS mission, launched on 2 November 2009. The first part of my thesis concerned the validation of the SMOS soil-moisture product. At the local scale, a comparison was made between soil moisture retrieved from satellite observations (SMOS, AMSR-E and ASCAT), modelled soil moisture (ECMWF), and in situ measurements over four test watersheds in the United States. Second, the SMOS soil-moisture product was evaluated at the global scale using triple collocation. The global map of the SMOS error was then related to various other parameters using the ANOVA and CART methods in order to determine its cause. The SMOS soil-moisture product was then placed in a historical context: two methods are presented to link SMOS and AMSR-E soil moisture over the four test sites in the United States, CDF matching and copulas. The last part returns to the SMOS algorithm and its innovations with respect to existing ones. To this end, a simpler algorithm was implemented and adapted to the multi-angular acquisitions of SMOS. A comparison between the soil moisture retrieved with this simplified model and that of SMOS gives a better understanding of the improvements that SMOS can provide.
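As a hedged illustration of one of the two linking methods, CDF matching rescales one sensor's time series so that its empirical distribution matches the other's; the quantile-interpolation implementation below is a common variant, not necessarily the thesis's exact procedure, and the data are synthetic stand-ins.

```python
import numpy as np

def cdf_match(source, reference):
    """Rescale `source` so its empirical CDF matches that of `reference`
    (piecewise-linear quantile mapping; one common variant)."""
    q = np.linspace(0.0, 1.0, 101)
    src_q = np.quantile(source, q)
    ref_q = np.quantile(reference, q)
    return np.interp(source, src_q, ref_q)

rng = np.random.default_rng(0)
amsre_like = rng.beta(2, 5, 1000) * 0.5   # synthetic stand-ins for two
smos_like = rng.beta(3, 4, 1000) * 0.4    # soil-moisture time series
matched = cdf_match(amsre_like, smos_like)
print(matched.mean(), smos_like.mean())   # distributions now aligned
```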
Using pole decompositions as starting points, the one-parameter ($-1 \le c < 1$) nonlocal and nonlinear Zhdanov-Trubnikov (ZT) equation for the steady shapes of premixed gaseous flames is studied in the large-wrinkle limit. The singular integral equations for pole densities are closely related to those satisfied by the spectral density in the O(n) matrix model, with $n = -2(1+c)/(1-c)$. They can be solved via the introduction of complex resolvents and the use of complex analysis. We retrieve results obtained recently for $-1 \le c \le 0$, and we explain and cure their pathologies when they are continued naively to $0 < c < 1$. Moreover, for any $-1 \le c < 1$, we derive closed-form expressions for the shapes of steady isolated flame crests, and then for bicoalesced periodic fronts. These theoretical results fully agree with numerical resolutions. Open problems are mentioned.
The complex interactions of localized vortices with waves are investigated using a model of point vortices in the presence of a transverse or longitudinal wave. This simple model shows rich dynamical behavior, including oscillations of a dipole, splitting and merging of two like-circulation vortices, and chaos. The analytical and numerical results of this model have been found to predict, under certain conditions, the behavior of more complex systems, such as the vortices of the Charney-Hasegawa-Mima equation, where the presence of waves strongly affects the evolution of large coherent structures.
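As a hedged sketch of the underlying model, the code below integrates the classical equations of motion for point vortices in the plane; the wave term the paper adds is omitted, and the circulations, time step, and integrator are illustrative choices.

```python
import numpy as np

def vortex_velocity(z, gamma):
    """Point-vortex velocities in the complex plane:
    conj(dz_k/dt) = (1 / 2*pi*i) * sum_{j != k} gamma_j / (z_k - z_j)."""
    v = np.zeros_like(z)
    for k in range(z.size):
        others = np.arange(z.size) != k
        v[k] = np.conj(np.sum(gamma[others]
                              / (2j * np.pi * (z[k] - z[others]))))
    return v

# A dipole (opposite circulations) should translate at constant speed.
z = np.array([0.0 + 0.5j, 0.0 - 0.5j])
gamma = np.array([1.0, -1.0])
dt = 0.01
for _ in range(1000):                 # simple midpoint (RK2) integration
    vmid = vortex_velocity(z + 0.5 * dt * vortex_velocity(z, gamma), gamma)
    z = z + dt * vmid
print(z)   # both vortices advected together along the x axis
```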
We study localized waves in chains of oscillators coupled by Hertzian interactions and trapped in local potentials. This problem is originally motivated by Newton's cradle, a mechanical system consisting of a chain of touching beads subject to gravity and attached to inelastic strings. We consider an unusual setting with local oscillations and collisions acting on similar time scales, a situation corresponding e.g. to a modified Newton's cradle with beads mounted on stiff cantilevers. Such systems support static and traveling breathers with unusual properties, including double exponential spatial decay, almost vanishing Peierls-Nabarro barrier and spontaneous direction-reversing motion. We prove analytically the existence of surface modes and static breathers for anharmonic on-site potentials and weak Hertzian interactions.
Policy gradient methods are reinforcement learning algorithms that adapt a parameterized policy by following a performance gradient estimate. Many conventional policy gradient methods use Monte-Carlo techniques to estimate this gradient. The policy is improved by adjusting the parameters in the direction of the gradient estimate. Since Monte-Carlo methods tend to have high variance, a large number of samples is required to attain accurate estimates, resulting in slow convergence. In this paper, we first propose a Bayesian framework for policy gradient, based on modeling the policy gradient as a Gaussian process. This reduces the number of samples needed to obtain accurate gradient estimates. Moreover, estimates of the natural gradient as well as a measure of the uncertainty in the gradient estimates, namely, the gradient covariance, are provided at little extra cost. Since the proposed Bayesian framework considers system trajectories as its basic observable unit, it does not require the dynamics within each trajectory to be of any special form, and thus can be easily extended to partially observable problems. On the downside, it cannot take advantage of the Markov property when the system is Markovian. To address this issue, we then extend our Bayesian policy gradient framework to actor-critic algorithms and present a new actor-critic learning model in which a Bayesian class of non-parametric critics, based on Gaussian process temporal difference learning, is used. Such critics model the action-value function as a Gaussian process, allowing Bayes' rule to be used in computing the posterior distribution over action-value functions, conditioned on the observed data. Appropriate choices of the policy parameterization and of the prior covariance (kernel) between action-values allow us to obtain closed-form expressions for the posterior distribution of the gradient of the expected return with respect to the policy parameters. We perform detailed experimental comparisons of the proposed Bayesian policy gradient and actor-critic algorithms with classic Monte-Carlo based policy gradient methods, as well as with each other, on a number of reinforcement learning problems.
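For orientation, the sketch below implements the plain Monte-Carlo policy gradient estimator (REINFORCE) that serves as the high-variance baseline such Bayesian methods improve upon; the two-armed bandit environment, learning rate, and noise level are toy assumptions, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
true_rewards = np.array([1.0, 0.0])   # toy 2-armed bandit

def policy(theta):
    """Softmax policy over the two arms."""
    e = np.exp(theta - theta.max())
    return e / e.sum()

theta, alpha = np.zeros(2), 0.1
for _ in range(2000):
    p = policy(theta)
    a = rng.choice(2, p=p)
    r = true_rewards[a] + rng.normal(0.0, 0.5)   # noisy reward sample
    # REINFORCE: grad log pi(a) = one_hot(a) - p; a high-variance MC estimate.
    grad_log = -p
    grad_log[a] += 1.0
    theta += alpha * r * grad_log
print(policy(theta))   # probability mass concentrates on the better arm
```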
This thesis is devoted to the noise produced by tapping on an automobile dashboard, and to its influence on the perceived quality of the dashboard. It is conducted within a global human-centred design approach, whose final objective is to consider the interaction between the user and the product throughout the process of creating or improving an industrial object. First, an observation of customers' actions and perceptions in a realistic situation was carried out. By studying the operations performed and the freely expressed verbalisations, it was possible to propose relevant categories of analysis and to draw up an overview of the customer's perceptual dynamics. A finer analysis of the data related to the dashboard demonstrated the interest of working on the tapping noise. This first phase also made it possible to identify the nature of the subjective evaluation that this noise elicits in customers. A second subjective experiment was then carried out, this time in the laboratory. Its objective was to relate the subjective evaluation to descriptive qualities of the sound. This was the occasion to test an original methodology based on the psychological process of categorisation, which made it possible to describe a large set of samples from a single test session with naive subjects. The data processing notably led to the creation of a perceptual space that synthesises the complementary data on sound perception and constitutes a graphical tool of the "preference mapping" type. The search for an indicator of perceived quality measurable on the signal was then initiated. First, sound metrics correlated with the coordinates of the sounds on each axis of the perceptual space were proposed; the nature of these metrics was guided by the identification of the descriptive sound qualities best representing the perceptual axes. From the retained metrics, regression models were then sought. A linear model predicting the perceived-quality score is proposed. A second model, obtained by regression tree, predicts an evaluative category directly associated with a design action. Finally, we investigated the physical characteristics that are decisive for the identified perceptual criterion. First, through a re-examination of the perceptual data, technological factors were identified. Then, to study the influence of material variables, synthetic sounds generated by a simple physical plate model were created and evaluated using the perceived-quality prediction models.
The notion of differential privacy has emerged in the area of statistical databases as a measure of protection of the participants' sensitive information, which can be compromised by selected queries. Differential privacy is usually achieved by using mechanisms that add random noise to the query answer. Thus, privacy is obtained at the cost of reducing the accuracy, and therefore the "utility", of the answer. Since the utility depends on the user's side information, commonly modelled as a prior distribution, a natural goal is to design mechanisms that are optimal for every prior. However, it has been shown that such mechanisms do not exist for any query other than counting queries. Given the above negative result, in this paper we consider the problem of identifying a restricted class of priors for which an optimal mechanism does exist. Given an arbitrary query and a privacy parameter, we geometrically characterise a special region of priors as a convex polytope in the priors space. We then derive upper bounds for utility as well as for min-entropy leakage for the priors in this region. Finally we define what we call the "tight-constraints mechanism" and we discuss the conditions for its existence. This mechanism has the property of reaching the bounds for all the priors of the region, and thus it is optimal on the whole region.
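The noise-adding mechanisms mentioned here are typified by the Laplace mechanism; the sketch below shows it for a counting query. This is the standard textbook construction, not the paper's tight-constraints mechanism.

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng):
    """epsilon-differentially private answer: add Laplace noise with scale
    sensitivity/epsilon (standard construction for numeric queries)."""
    return true_answer + rng.laplace(0.0, sensitivity / epsilon)

rng = np.random.default_rng(0)
count = 42                    # counting query: sensitivity 1
for eps in (0.1, 1.0, 10.0):  # larger epsilon: less noise, weaker privacy
    print(eps, laplace_mechanism(count, 1.0, eps, rng))
```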
Aleppo pine is the most widespread pine species around the Mediterranean Basin. Its post-fire recruitment has been studied in depth, but regeneration of mature stands in fire-free conditions has received considerably less attention. This study examines the impact of different site preparation treatments on pine recruitment using three experimental mature stands along a gradient of site fertility in southeastern France. The stands were partially felled and subjected to the following treatments, replicated four times on each site: mechanical chopping (all sites), chopping followed by single soil scarification (all sites) or double scarification (2 sites), controlled fire of low intensity (2 sites) or of high intensity (1 site), and control (all sites). In addition, the influence of slash, either left on the soil or removed before treatments, was tested for the single scarification treatment on two of the sites. Pine regeneration was counted and soil cover conditions described at different time intervals: 1 to 6 years after the end of the treatments for two sites and 1 to 16 years for one site. Seedling dimensions were determined during the last count. Mean seedling densities after 6–9 years (0.57–1.06 pines/m2) were comparable to those found in post-fire conditions, although with a narrower range. Pine density was negligible in the control, while chopping followed by a single soil scarification emerged as the most favourable treatment tested at the three sites in terms of seedling density (0.74–1.54 pines/m2 after 6–9 years) and seedling growth. For this treatment, the amount of slash had a contrasting influence on pine density according to site conditions. Double scarification did not affect pine density. Controlled high-intensity fire, owing to slash presence, was very favourable for pine regeneration (2.35 pines/m2), although this treatment was only tested at one site. Lastly, we found low pine densities in the chopping and low-intensity controlled fire treatments (0.20 to 0.56 pines/m2). Variation in herb cover was a major factor influencing pine recruitment. This study emphasises the need for adapted site preparation treatments to regenerate mature pine stands in southern Europe.
The research programme on the languages of the French Revolution, set up in the 1970s within the Laboratoire de lexicologie politique at the ENS de Saint-Cloud, took off in the 1980s, as the journal Mots attests. It expanded on a precise methodological basis: discourse analysis and corpus lexicometry. It has also taken new directions, such as the history of discursive memory and the history of the French revolutions. Finally, new openings have recently taken shape in the conceptual history of discourse on the one hand, and in the history of linguistic change on the other, based on a corpus of texts by revolutionary writers and journalists acting as "remarqueurs" (observers) of the new political language, while the publication of digital corpora is regaining momentum at the initiative of foreign researchers.
Does francophone research "follow" anglophone research, or does it differ from it? In the field of Information Systems, to which this article is devoted, do anglophone and francophone works address the same problems and the same application domains, choose the same levels of analysis, refer to the same epistemologies, and use the same methodologies? A first part recalls the general results obtained over 25 years of IS literature. A second part then presents a comparison of 763 research articles over a common 15-year period: an "engineering" perspective for the anglophone community, and themes related to "managing IS" for the francophone one. A third part then presents the historical evolutions, separating the two samples: one can certainly speak of the "autonomy" of each community (an evolution towards strategy, or towards IS management), but certainly not of systematic "following".
This article offers a reflection on the coexistence of salaried work and volunteering in associations. What division of labour and what definition of roles are established between volunteers and employees? What conflicts arise, and what do they reveal about blurred boundaries and mismatched expectations on each side? What do these elements teach us about the professionalisation processes at work in these organisations? We rely mainly on the cases of two sectors that differ markedly in how far professionalisation has advanced and in their use of salaried and volunteer work: personal services and the environment (boxes 1 and 2). Historically, the associative world was first a place of volunteer commitment. Only after a certain period of existence, formalisation, or even institutionalisation of the organisations does salaried employment appear, without eliminating the volunteer presence, if only within the governing bodies, which are by definition voluntary.
This paper presents a similarity-based approach for prognostics of the Remaining Useful Life (RUL) of a system, i.e. the lifetime remaining between the present and the instant when the system can no longer perform its function. Data from failure dynamic scenarios of the system are used to create a library of reference trajectory patterns to failure. Given a failure scenario developing in the system, the remaining time before failure is predicted by comparing, by fuzzy similarity analysis, its evolution data to the reference trajectory patterns, and by aggregating their times to failure in a weighted sum whose weights account for their similarity to the developing pattern. The prediction of the failure time is dynamically updated as time goes by and measurements of signals representative of the system state are collected. The approach allows for the on-line estimation of the RUL. For illustration, a case study is considered regarding the estimation of the RUL in failure scenarios of the Lead Bismuth Eutectic eXperimental Accelerator Driven System (LBE-XADS).
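A hedged sketch of the similarity-weighted aggregation idea: each window of each reference trajectory is scored against the developing scenario by a fuzzy similarity, and the corresponding times-to-failure are averaged with those weights. The Gaussian similarity kernel, its width, and the synthetic trajectories are assumptions, not the paper's exact procedure.

```python
import numpy as np

def rul_estimate(current, library, beta=0.05):
    """Similarity-weighted RUL over all windows of all reference patterns."""
    n = current.size
    weights, ruls = [], []
    for traj in library:                       # traj: run-to-failure signal
        for start in range(traj.size - n):
            d2 = np.mean((traj[start:start + n] - current) ** 2)
            weights.append(np.exp(-d2 / beta))       # fuzzy similarity score
            ruls.append(traj.size - (start + n))     # time left after window
    w = np.asarray(weights)
    return float(np.dot(w, ruls) / w.sum())

rng = np.random.default_rng(0)
library = []                                   # synthetic degradation library
for _ in range(5):
    n_fail = int(rng.integers(80, 121))        # failure when signal reaches 1
    library.append(np.linspace(0.0, 1.0, n_fail)
                   + rng.normal(0.0, 0.02, n_fail))

current = np.linspace(0.0, 0.5, 50) + rng.normal(0.0, 0.02, 50)
print("estimated RUL:", rul_estimate(current, library))
```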
Glycogen synthase kinase-3 (GSK3) has been implicated in major neurological disorders, but its role in normal neuronal function is largely unknown. Here we show that GSK3beta mediates an interaction between two major forms of synaptic plasticity in the brain, N-methyl-D-aspartate (NMDA) receptor-dependent long-term potentiation (LTP) and NMDA receptor-dependent long-term depression (LTD). In rat hippocampal slices, GSK3beta inhibitors block the induction of LTD. Furthermore, the activity of GSK3beta is enhanced during LTD via activation of PP1. Conversely, following the induction of LTP, there is inhibition of GSK3beta activity. This regulation of GSK3beta during LTP involves activation of NMDA receptors and the PI3K-Akt pathway and disrupts the ability of synapses to undergo LTD for up to 1 hr. We conclude that the regulation of GSK3beta activity provides a powerful mechanism to preserve information encoded during LTP from erasure by subsequent LTD, perhaps thereby permitting the initial consolidation of learnt information.
Glycogen synthase kinase-3 (GSK-3), an important component of the glycogen metabolism pathway, is highly expressed in the CNS. It has been implicated in major neurological disorders including Alzheimer's disease, schizophrenia and bipolar disorders. Despite its central role in these conditions, it was not known until recently whether GSK-3 has neuronal-specific functions under normal conditions. However, recent work has shown that GSK-3 is involved in the regulation of, and cross-talk between, two major forms of synaptic plasticity, N-methyl-D-aspartate receptor (NMDAR)-dependent long-term potentiation (LTP) and NMDAR-dependent long-term depression (LTD). The present article summarizes this recent work and discusses its potential relevance to the treatment of neurological disorders.
The neuropeptide somatostatin has been suggested to play an important role during neuronal development, in addition to its established modulatory impact on neuroendocrine, motor and cognitive functions in adults. Although six somatostatin G protein-coupled receptors have been discovered, little is known about their distribution and function in the developing mammalian brain. In this study, we have first characterized the developmental expression of the somatostatin receptor sst2A, the subtype found most prominently in the adult rat and human nervous system. In the rat, sst2A receptor expression appears as early as E12 and is restricted to post-mitotic neuronal populations leaving the ventricular zone. From E12 on, migrating neuronal populations immunopositive for the receptor were observed in numerous developing regions, including the cerebral cortex, hippocampus and ganglionic eminences. Intense but transient immunoreactive signals were detected in the deep part of the external granular layer of the cerebellum, in the rostral migratory stream, and in tyrosine hydroxylase- and serotonin-positive neurons and axons. Activation of the sst2A receptor in vitro in rat cerebellar microexplants and primary hippocampal neurons revealed stimulatory effects on neuronal migration and axonal growth, respectively. In the human cortex, receptor immunoreactivity was located in the preplate at early development stages (8 gestational weeks) and was enriched in the outer part of the germinal zone at later stages. In the cerebellum, the deep part of the external granular layer was strongly immunoreactive at 19 gestational weeks, similar to the finding in rodents. In addition, migrating granule cells in the internal granular layer were also receptor-positive. Together, these results strongly suggest that the somatostatin sst2A receptor participates in the development and maturation of specific neuronal populations during rat and human brain ontogenesis.
In apprenticeship learning we aim to learn a good policy by observing the behavior of an expert or a set of experts. In particular, we consider the case where the expert acts so as to maximize an unknown reward function defined as a linear combination of a set of state features. In this paper, we consider the setting where we observe many sample trajectories (i.e., sequences of states) but only one or a few of them are labeled as experts' trajectories. We investigate the conditions under which the remaining unlabeled trajectories can help in learning a policy with a good performance. In particular, we define an extension to the max-margin inverse reinforcement learning proposed by Abbeel and Ng (2004) where, at each iteration, the max-margin optimization step is replaced by a semi-supervised optimization problem which favors classifiers separating clusters of trajectories. Finally, we report empirical results on two grid-world domains showing that the semi-supervised algorithm is able to output a better policy in fewer iterations than the related algorithm that does not take the unlabeled trajectories into account.
Poisson processes are used in various application fields (public health, biology, reliability, and so on). In their homogeneous version, the intensity is a deterministic constant; in their inhomogeneous version, it depends on time. To allow for an endogenous evolution of the intensity, we consider multiplicative intensity processes. Inference methods have been developed for fully observed trajectories; we deal with the case of a partially observed process. As a motivating example, consider the analysis of an electrical network through time. This network is composed of cables and accessories (joints). When a cable fails, it is replaced by a new cable connected to the network by two new accessories. When an accessory fails, the same kind of repair is carried out, leading to the addition of only one accessory. The failure rate depends on the stochastically evolving number of accessories. We only observe the event times; the initial number of accessories and the cause of each incident (cable or accessory) are only partially observed. The aim is to estimate the different failure rates or to make predictions. The inference is strongly influenced by the initial number of accessories, which is typically unknown. We deduce a sensible prior on the initial number of accessories from the probabilistic properties of the process. We illustrate the performance of our methodology in a large simulation study.
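As a hedged illustration of the modelling object, the sketch below simulates event times of a counting process whose intensity is affine in a stochastically evolving accessory count, following the repair rules described in the abstract; the rates, initial count, and horizon are toy assumptions, not the paper's network model.

```python
import numpy as np

def simulate(rate_cable, rate_acc, n_acc0, t_max, rng):
    """Multiplicative-intensity toy model: total failure rate is
    rate_cable + rate_acc * (current number of accessories). A cable
    failure adds two accessories, an accessory failure adds one."""
    t, n_acc, events = 0.0, n_acc0, []
    while True:
        lam = rate_cable + rate_acc * n_acc
        t += rng.exponential(1.0 / lam)          # next event (competing risks)
        if t > t_max:
            return events
        if rng.random() < rate_cable / lam:      # event type ~ rate share
            events.append((t, "cable"))
            n_acc += 2
        else:
            events.append((t, "accessory"))
            n_acc += 1

rng = np.random.default_rng(0)
events = simulate(rate_cable=0.5, rate_acc=0.01, n_acc0=100, t_max=50, rng=rng)
print(len(events), events[:3])
```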