Session: Chromatography and Mass Spectrometry: Anything New? III

Session Chair: Prof. Dr. Oliver Schmitz

Extending metabolomics by full-scale compound identification

Oliver Fiehn, UC Davis Genome Center
Standardized, automated data processing is necessary to learn across multiple studies. Metabolomics has matured enough to be used in large-scale biology, forming atlases of the metabolomes of organs and species and comparing metabolomic results across diseases and phenotypes. At the West Coast Metabolomics Center at UC Davis (WCMC), we have built a database environment over the years that not only serves in-house procedures but also enables high-end informatics processes for many clinical and pre-clinical studies. Untargeted metabolomics data at the WCMC are acquired using three assays: primary metabolites by GC-TOF MS (and GC-QTOF MS), biogenic amines by BEH-HILIC accurate-mass MS/MS, and complex lipids by CSH accurate-mass MS/MS. Data are converted locally to the unified mzXML format and pushed to the Amazon cloud service via a local S3 gateway. A suite of tools and databases supports further data analysis. The tools use the Java, Python and R programming environments, depending on data loads and other performance characteristics. The WCMC has published 19 different tools and databases for use by the metabolomics community. We show how the data processing software MS-DIAL is used for both LC-MS/MS and GC-MS data from low- and high-resolution mass spectrometers from all vendors. Over the past 15 years, the WCMC has amassed GC-TOF MS data for over 2,500 studies comprising more than 150,000 samples. These data can now be compared through the open-access portal http://binvestigate.fiehnlab.ucdavis.edu to match both known and unknown peaks in GC-MS profiles directly through the MS-DIAL software [1]. BinVestigate includes more than 7,500 unique peaks that detail the abundance and frequency of both identified and unknown metabolites in more than 80 species and organs. We show how unknown compounds can be classified, annotated and structurally identified using H/D exchange, NIST MS hybrid search, MS-FINDER and MassFrontier spectral interpretation, and retention time prediction.
Workflows have been adapted for both GC-MS and accurate-mass LC-MS/MS data. Retention times, MS and MS/MS spectra were matched against the enlarged MassBank of North America database as well as NIST17, together comprising more than 600,000 mass spectra, including 50,000 new experimental spectra of authentic natural products. We have now re-written the MS-DIAL data processing software for cloud computing as part of an automated, enterprise-grade MS data processing pipeline. Internal standards are used to correct for retention time drifts, enabling comparison of both identified and unknown metabolites across different matrices. Using 50 AWS cloud compute nodes, we analyzed 4,099 clinical plasma samples from a type 2 diabetes human cohort within hours. Systematic Error Removal by Random Forest (SERRF) enabled quantifications with less than 5% RSD. We give example results from clinical and pre-clinical studies to showcase the use of the overall WCMC cheminformatics environment.
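The reported quality metric, the relative standard deviation (RSD, also called the coefficient of variation), can be computed directly from repeated quality-control measurements. A minimal sketch, with hypothetical intensity values (the 5% threshold is from the abstract; the numbers are illustrative):

```python
import statistics

def rsd_percent(values):
    """Relative standard deviation in percent:
    100 * (sample standard deviation) / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical peak intensities of one metabolite across repeated QC injections
qc_intensities = [10400.0, 9900.0, 10150.0, 10050.0, 9800.0]
print(f"RSD = {rsd_percent(qc_intensities):.2f}%")  # well under the 5% threshold
```

In practice the RSD would be evaluated per metabolite over pooled QC samples after normalization; this snippet only shows the metric itself.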

GC-MS as tool to study tumor metabolism

Katja Dettmer-Wilde, University of Regensburg
Distinct changes in metabolism are a well-recognized hallmark of cancer. Targeted and untargeted metabolomics can help to identify these metabolic alterations during tumor development and progression. Due to the complexity of the metabolome, different analytical platforms are used to study changes in metabolism. Gas chromatography hyphenated to mass spectrometry is a well-established tool in metabolomics. However, the analysis of changes in metabolite concentrations provides only a snapshot of the metabolic state of a cell or an organism. Stable isotope labeling experiments, using for example 13C or 15N as tracer isotopes, can deliver insights into the utilization of substrates and metabolic pathways. These experiments are used for metabolic flux and stable isotope tracer analysis. With the latter approach, the metabolic fate of a compound or the activity of a metabolic pathway is assessed by interpreting the labeling pattern in downstream metabolites rather than by calculating fluxes. An essential requirement in stable isotope tracer experiments is the correction for natural stable isotope abundance and tracer purity before data interpretation. In this context, IsoCorrectoR, a tool to correct for natural isotope abundance and tracer impurity in MS and MS/MS data, will be presented [1]. Selected examples of stable isotope tracing will be discussed, for example in cells with a double knockout of lactate dehydrogenase A and B [2] and in a B-cell line with an inducible Myc construct [3]. Using 13C1-labeled glutamine, we could show that glutamine also feeds into reductive carboxylation upon stimulation of the model B-cell line [3].
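The natural-abundance correction mentioned above is commonly formulated as a linear system: the measured mass-isotopomer distribution (MID) equals a correction matrix times the true labeling distribution. A minimal sketch of this idea for carbon only, using a binomial model of natural 13C incorporation (this is a generic illustration of the principle, not the IsoCorrectoR implementation, and it ignores tracer purity and other elements):

```python
import numpy as np
from math import comb

P13C = 0.0107  # natural abundance of 13C (assumed constant)

def correction_matrix(n_carbons):
    """Column j: measured MID expected for a molecule carrying exactly j
    13C tracer atoms, with natural 13C distributed binomially over the
    remaining n - j unlabeled carbons. Lower triangular, hence invertible."""
    n = n_carbons
    C = np.zeros((n + 1, n + 1))
    for j in range(n + 1):
        for k in range(n - j + 1):  # k extra 13C from natural abundance
            C[j + k, j] = comb(n - j, k) * P13C**k * (1 - P13C)**(n - j - k)
    return C

def correct_mid(measured, n_carbons):
    """Solve C @ true = measured, clip negatives from noise, renormalize.
    (A non-negative least-squares fit would be more robust; a plain solve
    is enough for this sketch.)"""
    true = np.linalg.solve(correction_matrix(n_carbons),
                           np.asarray(measured, dtype=float))
    true = np.clip(true, 0.0, None)
    return true / true.sum()
```

For example, for a three-carbon metabolite with a true distribution of 60% M+0 and 40% M+3, forward-multiplying by the matrix yields the "measured" MID with natural-abundance smearing, and `correct_mid` recovers the original distribution.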

Hyper-fast flow-field thermal gradient GC: Latest developments, applications and perspectives

Peter Boeker, University of Bonn
Flow-field thermal gradient GC (FF-TG-GC) is a new type of gas chromatography that is both very fast and high-resolution. Measurement cycles of below 60 seconds, including the cool-down phase, are possible. An additional advantage of the method is the significant reduction of elution temperatures, which extends the range of GC-amenable substances. A recent development is active cooling of the support structure of the resistively heated column, so that a temperature program can now be started at 20 °C. Very volatile substances, e.g. vinyl chloride, are focused on the column at this temperature. Despite this low starting temperature, the cooling phase from 350 °C lasts only a few seconds. The hyper-fast GC is compatible with all common sample introduction methods: SPME, headspace and liquid injection. With SPME in particular, very fast measurements are possible using short absorption and desorption phases. The main application of FF-TG-GC will be high-throughput analysis of large sample numbers in short times, but the demand for fast results, e.g. in security applications, is also met.

Pushing the peak capacity boundaries in GCxGC

Tadeusz Górecki, University of Waterloo
Comprehensive two-dimensional chromatographic separations are one of the most exciting recent developments in separation science. Comprehensive two-dimensional gas chromatography (GC×GC) debuted in 1991 [1]. The technique is based on the repeated collection of small fractions of the effluent from the first-dimension column and their re-injection into the second-dimension column for additional separation. The interface between the two columns that makes a GC×GC separation possible is called a modulator. Over the years, GC×GC has evolved from an academic curiosity into a widely used method thanks to many instrumental and software advances and numerous demonstrations of its separation power. Many different modulator designs have been developed, each with its own set of strengths and weaknesses. Since the columns used in the two dimensions of GC×GC are never fully orthogonal, because analyte volatility always plays a significant role, the separation in the second dimension (²D) of GC×GC can be carried out under practically isothermal conditions. However, this also means that peaks eluting from the ²D column broaden quite rapidly. In addition, analytes retained strongly enough not to elute within a given modulation period show up in subsequent periods, a phenomenon called wraparound. The peak capacity in GC×GC is, to a first approximation, the product of the peak capacities of the two dimensions. Two strategies can be used to maximize two-dimensional peak capacity: reduction of the injection band width, leading to overall narrower peaks, and temperature programming of the ²D to maintain uniform peak width throughout the entire modulation period (while also minimizing wraparound). The talk will present our contributions to this area, including new modulator designs and pioneering developments in ²D temperature programming.
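The first-approximation product rule for GC×GC peak capacity can be made concrete with a small calculation. A sketch with illustrative (assumed) numbers, estimating each dimension's peak capacity as the available separation time divided by the peak width in that dimension:

```python
def gcxgc_peak_capacity(run_time_s, d1_peak_width_s,
                        modulation_period_s, d2_peak_width_s):
    """First-approximation GC×GC peak capacity: the product of the
    peak capacities of the two dimensions.
    1D capacity ~ total run time / first-dimension peak width;
    2D capacity ~ modulation period / second-dimension peak width."""
    n1 = run_time_s / d1_peak_width_s
    n2 = modulation_period_s / d2_peak_width_s
    return n1 * n2

# Assumed example: 1-hour run, 10-s first-dimension peaks,
# 4-s modulation period, 0.1-s second-dimension peaks
print(gcxgc_peak_capacity(3600, 10, 4, 0.1))  # 360 * 40 = 14400
```

The same arithmetic shows why the two strategies above pay off: narrower peaks raise both factors, and keeping the ²D peak width uniform over the modulation period preserves the second factor across the whole run.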