Koskinen T.T.,University of Arizona |
Harris M.J.,University College London |
Yelle R.V.,University of Arizona |
Lavvas P.,CNRS Molecular and Atmospheric Spectrometry Group
Icarus | Year: 2013
The detections of atomic hydrogen, heavy atoms and ions surrounding the extrasolar giant planet (EGP) HD209458b constrain the composition, temperature and density profiles in its upper atmosphere. Thus the observations provide guidance for models that have so far predicted a range of possible conditions. We present the first hydrodynamic escape model for the upper atmosphere that includes all of the detected species in order to explain their presence at high altitudes, and to further constrain the temperature and velocity profiles. This model calculates the stellar heating rates based on recent estimates of photoelectron heating efficiencies, and includes the photochemistry of heavy atoms and ions in addition to hydrogen and helium. The composition at the lower boundary of the escape model is constrained by a full photochemical model of the lower atmosphere. We confirm that molecules dissociate near the 1 μbar level, and find that complex molecular chemistry does not need to be included above this level. We also confirm that diffusive separation of the detected species does not occur because the heavy atoms and ions collide frequently with the rapidly escaping H and H+. This means that the abundance of the heavy atoms and ions in the thermosphere simply depends on the elemental abundances and ionization rates. We show that, as expected, H and O remain mostly neutral up to at least 3 Rp, whereas both C and Si are mostly ionized at significantly lower altitudes. We also explore the temperature and velocity profiles, and find that the outflow speed and the temperature gradients depend strongly on the assumed heating efficiencies. Our models predict an upper limit of 8000 K for the mean (pressure-averaged) temperature below 3 Rp, with a typical value of 7000 K based on the average solar XUV flux at 0.047 AU. We use these temperature limits and the observations to evaluate the role of stellar energy in heating the upper atmosphere. © 2012 Elsevier Inc.
Liger-Belair G.,CNRS Molecular and Atmospheric Spectrometry Group |
Liger-Belair G.,University Of Reims
European Physical Journal: Special Topics | Year: 2012
Bubbles in a glass of champagne may seem like the acme of frivolity to most people, but in fact they may rather be considered as a fantastic playground for any physicist. Actually, the so-called effervescence process, which enlivens the tasting of champagne and sparkling wines, is the result of the fine interplay between dissolved CO2 gas molecules, tiny air pockets trapped within microscopic particles during the pouring process, and properties of both the glass and the liquid. Results obtained concerning the various steps where the CO2 molecule plays a role (from its ingestion in the liquid phase during the fermentation process to its progressive release in the headspace above the tasting glass as bubbles collapse) are gathered and synthesized to propose a self-consistent and global overview of how gaseous and dissolved CO2 impact champagne and sparkling wine science. Physicochemical processes behind the nucleation, rise, and burst of gaseous CO2 bubbles found in glasses poured with champagne and sparkling wines are depicted. Those phenomena, observed in close-up through high-speed photography, are often visually appealing. I hope that your enjoyment of champagne will be enhanced after reading this fully illustrated review dedicated to the science hidden right under your nose each time you enjoy a glass of champagne. © 2012 EDP Sciences and Springer.
Rey M.,CNRS Molecular and Atmospheric Spectrometry Group |
Nikitin A.V.,Tomsk State University |
Tyuterev V.G.,CNRS Molecular and Atmospheric Spectrometry Group
Astrophysical Journal | Year: 2014
The paper describes the construction of complete sets of hot methane lines based on accurate ab initio potential and dipole moment surfaces and extensive first-principle calculations. Four line lists spanning the [0–5000] cm⁻¹ infrared region were built at T = 500, 1000, 1500, and 2000 K. For each of these four temperatures, we have constructed two versions of line lists: a version for high-resolution applications containing strong and medium lines, and a full version appropriate for low-resolution opacity calculations. A comparison with available empirical databases is discussed in detail for both cold and hot bands, giving very good agreement for line positions, typically <0.1–0.5 cm⁻¹, and 5% for intensities of strong lines. Together with numerical tests using various basis sets, this confirms the computational convergence of our results for the most important lines, which is the major issue for theoretical spectra predictions. We showed that transitions with lower-state energies up to 14,000 cm⁻¹ could give significant contributions to the methane opacity and have to be systematically taken into account. Our list at 2000 K calculated up to J = 50 contains 11.5 billion transitions for I > 10⁻²⁹ cm mol⁻¹. These new lists are expected to be quantitatively accurate with respect to the precision of available and currently planned observations of astrophysical objects with improved spectral resolution. © 2014. The American Astronomical Society. All rights reserved.
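The need for separate line lists at each temperature stems from the strong temperature dependence of line intensities. As an illustration only (not the authors' variational machinery), the standard HITRAN-style conversion rescales an intensity from a reference temperature using the lower-state energy, the transition wavenumber, and the ratio of total internal partition sums; the partition-sum arguments below are placeholders that would in practice come from tabulated data:

```python
import math

C2 = 1.4387769  # second radiation constant hc/k_B, in cm*K

def scale_intensity(s_ref, nu, e_lower, q_ref, q_t, t_ref=296.0, t=2000.0):
    """Rescale a line intensity from t_ref to t (HITRAN convention).

    s_ref      -- intensity at t_ref (cm^-1 / (molecule cm^-2))
    nu         -- transition wavenumber (cm^-1)
    e_lower    -- lower-state energy E'' (cm^-1)
    q_ref, q_t -- total internal partition sums at t_ref and t
    """
    # Boltzmann population factor of the lower state
    boltzmann = math.exp(-C2 * e_lower / t) / math.exp(-C2 * e_lower / t_ref)
    # stimulated-emission correction
    stim = ((1.0 - math.exp(-C2 * nu / t))
            / (1.0 - math.exp(-C2 * nu / t_ref)))
    return s_ref * (q_ref / q_t) * boltzmann * stim
```

A hot-band line with a high lower-state energy (say 5000 cm⁻¹) gains many orders of magnitude in intensity between 296 K and 2000 K, which is why transitions with E″ up to 14,000 cm⁻¹ matter for the opacity.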
Cordier D.,CNRS Molecular and Atmospheric Spectrometry Group
Monthly Notices of the Royal Astronomical Society | Year: 2016
The hydrocarbon seas of Titan, discovered by the Cassini/Huygens mission, are among the most mysterious and interesting features of this moon. In the future, a possible dedicated planetary probe will certainly measure the speed of sound in this cryogenic liquid, as was planned in the case of Huygens landing in a sea. Previous theoretical studies of such acoustic measurements were based on simple models, leading in some cases to unphysical situations. Employed in a vast body of chemical engineering work, the state-of-the-art perturbed-chain statistical associating fluid theory (PC-SAFT) model has recently been introduced in studies aimed at Titan. Here, I revisit the issue of the speed of sound in Titan's liquids in light of this theory. I describe, in detail, the derivation of the speed of sound from the chosen equation of state and the potential limitations of the approach. To estimate the composition of a ternary liquid mixture N2:CH4:C2H6 from speed-of-sound measurements, an original inversion algorithm is proposed. It is shown that 50 measurements between 90 and 100 K are enough to ensure an accuracy of the derived compositions of better than 10 per cent. The influence of the possible presence of propane is also investigated. © 2016 The Author.
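The inversion idea can be sketched with a toy example. This is not Cordier's algorithm, and the linear forward model below is a made-up stand-in for the PC-SAFT speed of sound; the sketch generates synthetic measurements between 90 and 100 K for a known N2:CH4:C2H6 mixture, then grid-searches the mole fractions that minimize the misfit:

```python
import itertools

def sound_speed(t_k, x_n2, x_ch4, x_c2h6):
    """Toy linear stand-in for a PC-SAFT speed-of-sound model (m/s).
    Coefficients are illustrative, not fitted to real data."""
    a = 1400.0 * x_n2 + 1600.0 * x_ch4 + 1900.0 * x_c2h6  # baseline per species
    b = 6.0 * x_n2 + 3.0 * x_ch4 + 2.0 * x_c2h6           # thermal slope
    return a - b * (t_k - 90.0)

def invert_composition(measurements, step=0.01):
    """Grid search over (x_N2, x_CH4), with x_C2H6 = 1 - x_N2 - x_CH4,
    minimizing the squared misfit to (T, c) measurements."""
    grid = [i * step for i in range(int(round(1.0 / step)) + 1)]
    best = None
    for x_n2, x_ch4 in itertools.product(grid, grid):
        x_c2h6 = 1.0 - x_n2 - x_ch4
        if x_c2h6 < -1e-12:  # outside the composition simplex
            continue
        misfit = sum((c - sound_speed(t, x_n2, x_ch4, x_c2h6)) ** 2
                     for t, c in measurements)
        if best is None or misfit < best[0]:
            best = (misfit, x_n2, x_ch4, x_c2h6)
    return best[1:]

# 50 synthetic "measurements" between 90 and 100 K for a known mixture
true_x = (0.10, 0.60, 0.30)
data = [(90.0 + 10.0 * i / 49.0,
         sound_speed(90.0 + 10.0 * i / 49.0, *true_x)) for i in range(50)]
recovered = invert_composition(data)
```

With noise-free data and a uniquely identifiable forward model, the grid search recovers the composition to within the grid step; the real algorithm must additionally contend with measurement noise and the nonlinear PC-SAFT equation of state.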
Nikitin A.V.,CNRS Molecular and Atmospheric Spectrometry Group |
Rey M.,CNRS Molecular and Atmospheric Spectrometry Group |
Tyuterev V.G.,CNRS Molecular and Atmospheric Spectrometry Group
Chemical Physics Letters | Year: 2013
New dipole moment surfaces (DMS) of methane are constructed using extended ab initio CCSD(T) calculations at 19,882 nuclear configurations. The DMS analytical representation is determined through an expansion in symmetry-adapted products of internal nonlinear coordinates involving 967 parameters up to the sixth order. Integrated intensities of the seven lower polyads up to J = 30 for 12CH4 and 13CH4 are in good agreement with the HITRAN 2008 database and with other available experimental data. © 2013 Elsevier B.V. All rights reserved.
Bonhommeau D.A.,CNRS Molecular and Atmospheric Spectrometry Group
Computer Physics Communications | Year: 2015
This new version of the MCMC2 program for modeling the thermodynamic and structural properties of multiply-charged clusters fixes some minor bugs present in earlier versions. A figure representing the required RAM per replica as a function of the cluster size (N ≤ 20,000) is also provided as a benchmark. New version program summary Program title: MCMC2 Catalogue identifier: AENZ-v1-2 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENZ-v1-2.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 143653 No. of bytes in distributed program, including test data, etc.: 1396311 Distribution format: tar.gz Programming language: Fortran 90 with MPI extensions for parallelization. Computer: x86 and IBM platforms. Operating system: CentOS 5.6 Intel Xeon X5670 2.93 GHz, gfortran/ifort (version 13.1.0) + MPICH2; CentOS 5.3 Intel Xeon E5520 2.27 GHz, gfortran/g95/pgf90 + MPICH2; Red Hat Enterprise 5.3 Intel Xeon X5650 2.67 GHz, gfortran + IntelMPI; IBM Power 6 4.7 GHz, xlf + PESSL (IBM parallel library). Has the code been vectorized or parallelized?: Yes, parallelized using MPI extensions. Number of CPUs used: Up to 999 RAM: (per CPU core) 10-20 MB. The physical memory needed for the simulation depends on the cluster size; the values indicated are typical for small or medium-sized clusters (N ≈ 300-400). The size of A_N^(n+) clusters (N = number of particles, n = number of charged particles with n ≤ N) should not exceed 1.6×10⁴ (respectively 2.0×10⁴) particles on servers with 2 GB (respectively 3 GB) of RAM per CPU core if n = 0 (neutral clusters) or n = N ("fully-charged" clusters). For charged clusters composed of neutral and charged particles (e.g., n = N/2), the maximum cluster size can drop to 1.4×10⁴ and 1.8×10⁴ on servers with 2 GB and 3 GB of RAM, respectively (see the figure given in the Supplementary Material). 
Supplementary material: A figure showing the amount of RAM required per replica as a function of the size of A_N^(n+) clusters can be downloaded. Supplementary material related to this article can be found online at http://dx.doi.org/10.1016/j.cpc.2015.06.017. The following is the Supplementary material related to this article: amount of RAM required per replica (in GB) as a function of the cluster size. The calculations have been performed without taking polarization into account. Catalogue identifier of previous version: AENZ-v1-1 Journal reference of previous version: Comput. Phys. Comm. 185 (2014) 1188 Classification: 23. Does the new version supersede the previous version?: Yes Nature of problem: We provide a general parallel code to investigate structural and thermodynamic properties of multiply charged clusters. Solution method: Parallel Monte Carlo methods are implemented for the exploration of the configuration space of multiply charged clusters. Two parallel Monte Carlo methods were found appropriate to achieve such a goal: the Parallel Tempering method, where replicas of the same cluster at different temperatures are distributed among different CPUs, and Parallel Charging, where replicas (at the same temperature) having different particle charges or numbers of charged particles are distributed on different CPUs. Reasons for new version: This new version of the MCMC2 program for modeling the thermodynamic and structural properties of multiply-charged clusters fixes some minor bugs present in earlier versions. A figure representing the required RAM per replica as a function of the cluster size (N ≤ 20,000) is also provided as a benchmark. Summary of revisions: Additional features of MCMC2 version 1.1.1: Same as in the previous version; Modifications or corrections to MCMC2 version 1.1 [2,3]: Several minor bugs were fixed in this version: i. 
A default value for the integer "irand", used to select the type of random number generator (keyword SEED, subkeyword METHOD), was missing. It is set to 0. ii. The subkeyword "EVERY", used to define the frequency of statistics printing (keyword "STATISTICS"), was missing and has been implemented in the program. Before version 1.1.1, the choice entered into the setup file was simply ignored and the frequency was always set to its default value, namely a printing every 100 Monte Carlo sweeps. Some unused integers are removed from subroutines in lib4-pol.f90 and lib4-dampol.f90 and some test runs are slightly modified. In particular, in test run 2, the particle and probe diameters used to evaluate the number of surface particles were fixed to 0.8 and 1.2, respectively (see keyword "SURFACE"). Actually, the probe diameter should be smaller than the particle diameter, and the two values were therefore swapped. The subroutines dLJ-nopol-hom (in lib4-nopol.f90), dLJ-pol-hom (in lib4-pol.f90), and dLJ-dampol-hom (in lib4-dampol.f90) are renamed dLJ-nopol, dLJ-pol, and dLJ-dampol, respectively, to avoid any ambiguity. The suffix "Hom", which stood for "homogeneity" in order to indicate that Lennard-Jones interactions between particles were the same, was improper since homogeneity is commonly related to invariance by translation, and all the properties of multiply charged clusters cannot be considered invariant by translation in the most general case. The renaming of the three subroutines obviously has no influence on the results, and some related comments have been modified accordingly. Restrictions: The current version of the code uses Lennard-Jones interactions, as the main cohesive interaction between spherical particles, and electrostatic interactions (charge-charge, charge-induced dipole, induced dipole-induced dipole, polarization). 
Furthermore, the Monte Carlo simulations can only be performed in the NVT ensemble, and the size of charged clusters should not exceed 2.0×10⁴ particles on CPU cores with less than 3 GB of RAM each. It is worth noting that the latter restriction is not significantly crippling since MCMC2 should be mainly devoted to the investigation of medium-sized cluster properties, owing to the difficulty of converging Monte Carlo simulations on large systems (N ≥ 10⁴). Unusual features: The Parallel Charging method, based on the same philosophy as Parallel Tempering but with particle charges and the number of charged particles as parameters instead of temperature, is an interesting new approach to explore energy landscapes. Splitting of the simulations is allowed and averages are accordingly updated. Running time: The running time depends on the number of Monte Carlo steps, the cluster size, and the type of interactions selected (e.g., polarization turned on or off, and the method used for calculating the induced dipoles). Typically a complete simulation can last from a few tens of minutes to a few hours for small clusters (N ≈ 100, not including polarization interactions), to one week for large clusters (N ≥ 1000, not including polarization interactions), and several weeks for large clusters (N ≥ 1000) when including polarization interactions. A restart procedure has been implemented that enables a splitting of the simulation accumulation phase. References: E. Pahl, F. Calvo, L. Koci, P. Schwerdtfeger, Accurate Melting Temperatures for Neon and Argon from Ab Initio Monte Carlo Simulations, Angew. Chem. Int. Ed. 47 (2008) 8207-8210. D.A. Bonhommeau, M.-P. Gaigeot, MCMC2: A Monte Carlo code for multiply-charged clusters, Comput. Phys. Commun. 184 (2013) 873-884. D.A. Bonhommeau, M. Lewerenz, M.-P. Gaigeot, MCMC2 (version 1.1): A Monte Carlo code for multiply-charged clusters, Comput. Phys. Commun. 185 (2014) 1188-1191. M.A. Miller, D.A. Bonhommeau, C.J. Heard, Y. Shin, R. Spezia, M.-P. 
Gaigeot, Structure and stability of charged clusters, J. Phys.: Condens. Matter. 24 (2012) 284130. © 2015 Elsevier B.V.
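The Parallel Tempering scheme described in the program summary exchanges configurations between replicas held at different temperatures. As a generic illustration (not code extracted from MCMC2), the usual Metropolis criterion accepts a swap between replicas i and j with probability min(1, exp[(β_i − β_j)(E_i − E_j)]):

```python
import math
import random

K_B = 1.0  # Boltzmann constant in reduced units

def attempt_swap(energies, temps, i, j, rng=random.random):
    """Metropolis acceptance test for exchanging the configurations of
    replicas i and j in a Parallel Tempering run. Returns True on accept."""
    beta_i = 1.0 / (K_B * temps[i])
    beta_j = 1.0 / (K_B * temps[j])
    delta = (beta_i - beta_j) * (energies[i] - energies[j])
    # delta >= 0 is a "downhill" exchange and is always accepted;
    # otherwise accept with probability exp(delta)
    return delta >= 0.0 or rng() < math.exp(delta)
```

Parallel Charging follows the same philosophy, with particle charges or the number of charged particles playing the role that temperature plays here.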
Bonhommeau D.A.,CNRS Molecular and Atmospheric Spectrometry Group
Computer Physics Communications | Year: 2016
This version of the MCMC2 program corrects several glitches of earlier versions and adapts the keyword environment of radial, angular, and evaporation histograms to remove some useless columns in the data files or simply prevent their creation, when desired. Although most of the fixed bugs only resulted in warnings at compile time, one of them, related to an incorrect array reinitialization, was identified as the main source of instabilities of earlier versions of the program when compiled with the ifort compiler. The setup keywords altered to take into account the aforementioned corrections and improvements are detailed at the end of the manuscript. New version program summary Program title: MCMC2. Program Files doi: 10.17632/yr35brx38n.1 Licensing provisions: GNU General Public License (GPL) version 3. Programming language: Fortran 90 with MPI extensions for parallelization. Journal reference of previous version: Comput. Phys. Comm. 196 (2015) 614 Nature of problem: We provide a general parallel code to investigate structural and thermodynamic properties of multiply charged clusters. Solution method: Same as in the previous version. Reasons for the new version: Several glitches that may result in warnings at compile time or instabilities at the execution stage have been corrected. In particular, one of these glitches, related to an incorrect array reinitialization, might lead to crashes of the program. This bug is deemed to be the source of the instabilities reported in earlier versions of the code when they were compiled with the ifort compiler. We also take advantage of this new version to improve the definition of the keywords devoted to the generation of histograms, and we correct some comments throughout the program files. Summary of revisions 1. Additional features of MCMC2 version 1.1.2: Same as in the previous version. 2. Modifications or corrections to MCMC2 version 1.1.1: (a) Major glitch correction. 
One major glitch related to an improper reinitialization of the logical array lpsim_swap, devoted to the storage of rejection and acceptance decisions after Parallel Tempering (PT) or Parallel Charging (PC) configuration swap attempts, has been corrected. In some cases, especially for the highest-numbered replica, the program attempted to reinitialize a component of the array lpsim_swap that exceeded the array dimension, which might cause memory errors. We performed several simulations of 10¹⁰ Monte Carlo (MC) steps on small clusters (100 particles) and large clusters (1000 particles) to ensure that the program was stable when compiling the code with several versions of the gfortran and ifort compilers. Former instabilities of the MC simulations attributed to possible issues with the ifort compilers were probably a manifestation of this glitch. Amended subroutines: simulate_nopol in lib4-nopol.f90, simulate_pol in lib4-pol.f90, and simulate_dampol in lib4-dampol.f90. (b) Correction of minor glitches. Recent gfortran and ifort compilers might generate some warnings when compiling MCMC2 version 1.1.1. Several subroutines have been slightly modified to remove these warnings. i. Alterations in lib1.f90: The integer iout is no longer defined as a parameter in the subroutine vranf (main routine of the lagged Fibonacci random number generator). It is also declared as INTENT(INOUT) instead of INTENT(IN) in the subroutines xuinit and xuwarm. ii. Alterations in lib4-nopol.f90: A dummy value "0" used in the call of the subroutine dLJ_nopol is replaced by an array called dummy (of dimension N_part and declared as real) in the subroutine simulate_nopol. iii. Alterations in lib4-pol.f90 and lib4-dampol.f90: A dummy value "0" used in the call of the subroutine dLJ_pol (respectively dLJ_dampol) is replaced by an array called dummy (of dimension N_part and declared as real) in the subroutine simulate_pol (respectively simulate_dampol). 
The arrays dr (of type vector and dimension N_part×N_part) and dist (of type real and dimension N_part×N_part) are declared as INTENT(INOUT) instead of INTENT(IN) in the subroutine LJ_pol2 (respectively LJ_dampol2). (c) Other minor corrections. i. The arrays racc and rrej, used to evaluate the acceptance and rejection rates on each replica after PT or PC configuration swap attempts, are converted into double-precision reals since only one component of these arrays needs to be known per replica. This should save a small amount of memory. Amended subroutines: simulate_nopol in lib4-nopol.f90, simulate_pol in lib4-pol.f90, simulate_dampol in lib4-dampol.f90, ptswap, pcswap_nopol, pcswap_pol, pcswap_dampol, ptswap2, pcswap2_nopol, pcswap2_pol, and pcswap2_dampol in lib5.f90. ii. A comment in the main file (main.f90) was incomplete: The second stage of the equilibration phase can be PT or PC depending on the setup file entries, and not PT only. iii. The dimension of revap, the array that stores the number of evaporated particles, was incorrectly fixed to 3 instead of 5 in printin_stats_nopol (lib3-nopol.f90), which altered the intermediate files created at the end of each stage of a split simulation involving charged particles. iv. The dimension of the N_pol and N_nopol arrays is set to 4 instead of 2 in printout_evap_pol (lib3-pol.f90) to comply with their initial declaration in the module config (lib2.f90), although the last two components of these vectors are not currently used in printout_evap_pol. (d) Improvement of the evaporation histograms. i. A logical levap is added to enable the user not to generate the evaporation histograms. ii. A logical levap2 is added to enable the user to reinitialize to zero the evaporation histograms during split simulations. iii. 
Some comments are now printed in the output file to indicate whether the generation of evaporation histograms is deactivated. Amended subroutines and modules: run_params in lib2.f90, read_setup in read_setup.f90, evap_nopol in lib3-nopol.f90, evap_pol in lib3-pol.f90, simulate_nopol in lib4-nopol.f90, simulate_pol in lib4-pol.f90, and simulate_dampol in lib4-dampol.f90. (e) Improvement of the angular distributions. The angular distributions represent the angles between the radius vectors of charge carriers. In earlier versions of the code the angular distributions were systematically evaluated for all the charge carriers within the cluster and for the charge carriers located at the cluster surface, whatever the cluster composition. In particular, the angular distribution files were filled with zeros when the cluster was neutral. In this case, the calculation of the angular distributions is now automatically deactivated by setting the input parameter cos_every to zero. A warning is added in the output file to inform the user of this reinitialization. Amended routines and subroutines: main program MCMC2 in main.f90, print_surf, printin_surf and readin_surf in lib3.f90, histo_surf_nopol in lib3-nopol.f90, histo_surf_pol in lib3-pol.f90, simulate_nopol in lib4-nopol.f90, simulate_pol in lib4-pol.f90, and simulate_dampol in lib4-dampol.f90. (f) Improvement of the radial histograms. i. We remove the possibility to set radtyp=0, where radtyp is the parameter to select the type of radial histograms to be computed. This parameterization was redundant with the subkeyword USE of keyword RADIAL, which was introduced in version 1.1 to enable the deactivation of the calculation of radial histograms. The remaining integral values allowed for radtyp, namely 1-4, are subsequently shifted to 0-3. ii. radtyp=1 (i.e., generation of radial distributions for charged particles) is forbidden when the cluster is neutral. radtyp is then reinitialized to zero (default value). iii. 
radtyp=2 (i.e., generation of radial distributions for neutral particles) is forbidden when all the particles are charged. radtyp is then reinitialized to zero. iv. radtyp=3 (i.e., generation of radial distributions for all the particles, for charged particles, and for neutral particles) is forbidden when the cluster is neutral, only composed of positively charged particles, or only composed of negatively charged particles. In these three cases radtyp is reinitialized to zero. These reinitializations of radtyp aim to prevent the production of useless columns in the data files. Their omission in previous versions of the MCMC2 program did not affect the accuracy of the MC results. Amended routines and subroutines: main program MCMC2 in main.f90, read_setup in read_setup.f90, inhisto, inhisto_splitm, normhisto, printin_histo, readin_histo in lib3.f90, calhisto_nopol in lib4-nopol.f90, calhisto_pol in lib4-pol.f90. (g) Modified comments. Some comments and column headers of the data files have been corrected. 3. Modified setup keywords • RADIAL USE lrad GRID RCOM deltagrid stepgrid PARTICLE radtyp: Setting of parameters for generating one-particle radial histograms. lrad indicates whether the user wants (lrad=.true.) or does not want (lrad=.false.) the calculation of radial histograms. These histograms cannot extend beyond the radius of the Monte Carlo container but, for graphical purposes, the user is allowed to add a small distance deltagrid to the grid size (=radius+deltagrid). stepgrid is the grid step for one-particle radial histograms with respect to the cluster center of mass. The grid origin is hardcoded to zero since these histograms are calculated with respect to the cluster center of mass. The number Ngrid of grid points is automatically determined in the code from the grid size and the grid step. 
radtyp enables the user to specify the type of particles to be considered in the radial histograms (0=all the particles without any distinction, 1=charged particles only, 2=neutral particles only, 3=all the previous histograms (3=0+1+2)). In case of inconsistency between the input configurations and the value of radtyp, this parameter is reinitialized to zero. Default: lrad=.false., deltagrid=0, stepgrid=0.1, radtyp=0. • EVAPORATION USE levap PARTNB Npart_evap GRIDNB Nsamp_evap: Setting of parameters for defining the evaporation criterion and related histograms. levap indicates whether the user wants (levap=.true.) or does not want (levap=.false.) the calculation of evaporation histograms. Npart_evap defines the number of particles above which evaporation is assumed to be prominent during an MC sweep. Nsamp_evap is the bin size (in number of particles) selected for plotting the distribution of evaporated particles. Default: levap=.false., Npart_evap=1 and Nsamp_evap=1. • SPLITSIM USE lsplit isplit nsplit ANALYSIS lsurf lrad2 lang levap2: Setting of parameters for running split MC simulations. lsplit indicates whether the user wants (lsplit=.true.) or does not want (lsplit=.false.) to run a split simulation. Each split simulation is composed of several stages, where isplit is the number of the current stage and nsplit is the total number of stages. lsurf, lrad2, lang, and levap2 enable the user to reset the arrays for surface particle distributions, radial histograms, angular distributions of charged particles, and evaporation histograms (lsurf=.true., lrad2=.true., lang=.true., and levap2=.true.) or to use intermediate files (i.e., output files from the previous stage) as the starting point of the next stage (lsurf=.false., lrad2=.false., lang=.false., and levap2=.false.). Default: lsplit=.false., isplit=1, nsplit=2, lsurf=.false., lrad2=.false., lang=.false., levap2=.false. D. A. Bonhommeau, MCMC2 (version 1.1.1): A Monte Carlo code for multiply charged clusters, Comput. Phys. 
Commun. 196 (2015) 614–616. D. A. Bonhommeau, M.-P. Gaigeot, MCMC2: A Monte Carlo code for multiply-charged clusters, Comput. Phys. Commun. 184 (2013) 873–884. D. A. Bonhommeau, M. Lewerenz, M.-P. Gaigeot, MCMC2 (version 1.1): A Monte Carlo code for multiply-charged clusters, Comput. Phys. Commun. 185 (2014) 1188–1191. © 2016 Elsevier B.V.
Liger-Belair G.,CNRS Molecular and Atmospheric Spectrometry Group
Journal of Physical Chemistry B | Year: 2014
The question of how many carbon dioxide bubbles are likely to nucleate in a glass of champagne (or bubbly) is of concern for sommeliers, wine journalists, experienced tasters, and any open-minded physical chemist wondering about the complex phenomena at play in a glass of bubbly. The whole number of bubbles likely to form in a single glass is the result of the fine interplay between dissolved CO2, tiny gas pockets trapped within particles acting as bubble nucleation sites, and ascending bubble dynamics. Based on theoretical models combining ascending bubble dynamics and mass transfer equations, the falsely naïve question of how many bubbles are likely to form per glass is discussed in the present work. A theoretical relationship is derived, which provides the whole number of bubbles likely to form per glass, depending on various parameters of both the wine and the glass itself. © 2014 American Chemical Society.
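For a crude sense of the orders of magnitude involved (this is not the article's model, and every number below is a typical assumption rather than a value from the paper), one can divide the volume of CO2 available for degassing by the volume of a single bubble. Because much of the dissolved CO2 actually escapes by direct diffusion through the free surface, and bubbles grow during ascent, this ratio is only an upper bound:

```python
import math

def bubble_count_upper_bound(pour_volume_l=0.1,
                             co2_dissolved_gl=11.0,
                             co2_equilibrium_gl=1.5,
                             bubble_diameter_mm=0.5,
                             molar_mass_g=44.0,
                             molar_volume_l=24.0):
    """Upper bound on bubbles per glass, assuming all excess CO2 degasses
    as bubbles. Defaults are assumed typical values, not the article's."""
    co2_released_g = (co2_dissolved_gl - co2_equilibrium_gl) * pour_volume_l
    gas_volume_l = co2_released_g / molar_mass_g * molar_volume_l
    radius_m = 0.5 * bubble_diameter_mm * 1e-3
    bubble_volume_l = (4.0 / 3.0) * math.pi * radius_m ** 3 * 1000.0
    return gas_volume_l / bubble_volume_l
```

With these assumptions the bound lands in the millions of bubbles per glass; the derived theoretical relationship, which accounts for the wine and glass parameters explicitly, refines this kind of estimate.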
Alijah A.,CNRS Molecular and Atmospheric Spectrometry Group
Journal of Molecular Spectroscopy | Year: 2010
The results of full variational calculations for highly excited rovibrational states of H3+ are presented. The computed data have been adjusted, by comparison with experiment, to account for the neglect of non-adiabatic coupling in the calculations and for inaccuracies of the potential energy surface. Most data have been assigned by spectroscopic quantum numbers. Detailed tables of term values for J≤2 are provided. Some hitherto unassigned experimental lines could be identified. © 2010 Elsevier Inc. All rights reserved.
Liger-Belair G.,CNRS Molecular and Atmospheric Spectrometry Group
Journal of Physical Chemistry B | Year: 2016
Under standard champagne tasting conditions, the complex interplay between the level of dissolved CO2 found in champagne, its temperature, the glass shape, and the bubbling rate definitely impacts champagne tasting by modifying the neuro-physicochemical mechanisms responsible for aroma release and flavor perception. On the basis of theoretical principles combining heterogeneous bubble nucleation, ascending bubble dynamics, and mass transfer equations, a global model is proposed, depending on various parameters of both the wine and the glass itself, which quantitatively provides the progressive losses of dissolved CO2 from laser-etched champagne glasses. The question of champagne temperature was closely examined, and its role on the modeled losses of dissolved CO2 was corroborated by a set of experimental data. © 2016 American Chemical Society.