Human Cytome Project - How to explore - Update 19 April 2005

04-19-2005, 07:10 AM
Peter Van Osta



Hi,

As the on-line version of my article on the Human Cytome Project and the
application of cytomics in medicine and drug discovery (pharmaceutical
research) evolves, I put the updated version in this newsgroup for
reference. The original "question" on a Human Cytome Project was posted in
this newsgroup on Monday 1 December 2003.

On-line version (split version):
A Human Cytome Project - an idea
Human Cytome Project and Drug Discovery
Human Cytome Project - How to Explore
A framework for cytome exploration

============================================================================

Human Cytome Project - How to Explore

By Peter Van Osta
How to explore and find new directions for research

We may now be capable of studying a low-level layer of biological
integration, such as the genome or proteome, in great detail, but it is in
the higher-order spatial and temporal patterns of cellular (and beyond)
dynamics that the answers to our questions can be found. However, these
higher-order levels of biological integration are still being studied in a
dispersed way, due to the formidable technological and scientific
challenges we are facing.

The physiome is the quantitative description of the functioning organism
in normal and pathophysiological states (Bassingthwaighte JB, 2000). A
living organism and its cells are high-dimensional systems, both
structurally and functionally. A 4-D physical space (XYZ, time) is still a
formidable challenge to deal with compared to the 1-D problem of a
DNA-sequence. The even higher-order feature hyperspace which is derived
from this 4-D space is even further away from what we can easily
comprehend. We focus the major efforts of our applied research on the
level of technology we can achieve, not on the level of spatial and
temporal understanding which is required. Applied research is suffering
from a scale and dimensionality deficit in relation to the physical
reality it should deal with. Reality does not simplify itself to adapt to
the technology we use to explore biology just to please us.

At the moment we expect that an oligo- or even mono-parametric
low-dimensional analysis will allow us to draw conclusions with sufficient
predictive power to work all the way up to the disease processes in an
entire organism. We are using disease models with a predictive deficit,
which allow us to gather data at great speed and quantity, but in the end
the translation of the results into efficient treatment of diseases fails
in the majority of cases (up to 90 percent). The cost of this inefficient
process is becoming a burden which neither society nor the pharmaceutical
industry will be able to support indefinitely. As "the proof is in the
pudding", not in its ingredients, we have to improve the productivity of
biomedical and pharmaceutical research and broaden our functional
understanding of disease processes in order to prepare ourselves for the
challenges facing medicine and society.

If there were no consequences for the speed of exploration in relation to
the challenges medicine is facing today, the situation would of course be
entirely different. In many cases, the formulation of an appropriate
hypothesis is very difficult and the resulting cycle of formulating a
hypothesis and verifying it is a slow and tedious process. In order to
speed up the exploration of the cytome, a more open and less deterministic
approach will be needed (Kell DB, 2004).

Analytical tools need to be developed which can find the needle in the
haystack without a priori knowledge; in other words, we should be able
to find the black cat in a dark room, without knowing or assuming that
there is a black cat. An open and multi-parametric exploration of the
cytome should complement the more traditional hypothesis driven scientific
approach, so we can combine speed with in-depth exploration in a
two-leveled approach to cytomics. The multi-parametric reality which we
need to deal with requires a more multi-factorial exploration than the way
we explore the cellular level at this moment.
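
As an illustration of what such an open, hypothesis-free screen could
look like in practice, here is a minimal sketch in Python, on entirely
synthetic placeholder data, which flags cells whose multi-parametric
profile deviates from the bulk without specifying in advance what to
look for:

    # Hypothesis-free outlier detection on a per-cell feature matrix
    # (rows = cells, columns = measured parameters). Synthetic data.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    cells = rng.normal(size=(10_000, 40))     # 10,000 cells, 40 features

    # Flag cells whose profile deviates from the bulk, without assuming
    # in advance what a "deviant" cell looks like (the black cat).
    detector = IsolationForest(contamination=0.01, random_state=0)
    labels = detector.fit_predict(cells)      # -1 = outlier, 1 = inlier
    hits = np.flatnonzero(labels == -1)
    print(f"{hits.size} candidate cells for follow-up exploration")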

We now close our eyes to much of the complexity we observe, because our
disease models are not up to the challenge we are facing today. Feeling
happy with answers to questions in low-complexity disease models will not
help us at the end of the drug discovery pipeline. We reduce the
complexity of our datasets beyond the limits of predictive power and
meaningfulness. We must reduce the complexity of possible conclusions
(improvement or deterioration), but not the quality of data representation
or data extraction into our mathematical models. The value of a disease
model does not lie in the technological complexity of the machinery we use
to study it, but in its realistic representation of the disease process we
want to mimic.

A disease model which fails to generate data and conclusions which hold up
in drug development, years later, fails to fulfill its mission.
Disease models are not meant to predict the future behavior of the model,
but to predict the outcome of a disease and a treatment. The residual gap
between the model and the disease is in many cases too big to allow for
valid conclusions out of experiments with current low-level disease
models. Due to deficient early-stage disease models, the attrition rate in
pharmaceutical research is still very high (80 percent or 4 out of 5 drugs
in clinical research).

Every physical or biological system we try to explore shows some
background variation which we cannot capture in our models. We tend to
call this unaccounted variation background noise and try to eliminate it
by randomization of experiments, or we simply close our eyes to it. The
less variation we are capable of capturing in our models, the more
vulnerable we are to losing subtle correlations between events. It is our
inability
to model complex space-time dynamics which makes us stick to simplified
models which suffer from a correlation deficit in relation to reality.
Biological reality does not simplify itself just to please us, but we must
adapt ourselves to the dynamics of biological reality in order to increase
the correlation with in vivo processes.

It is often said that the easy targets to treat have already been found,
but in relation to the status of scientific knowledge and understanding,
“targets” were never easy to find. Disease models were just inadequate
to lead to an in-depth understanding of the actual dynamics of the disease
process. Just remember the concept of “miasma” before the work of
Louis Pasteur and Robert Koch on infectious diseases. Only when looking
back with present-day knowledge do we declare historical research
“easy”, but we tend to forget that those scientists were fighting an
uphill battle in their days.

Instead of focusing on ever further simplifying our low-dimensional and
oligo-parametric disease models in order to speed them up and only
increasing the complexity of the machinery to study them, we need a
paradigm shift to tackle the challenge ahead of us. Increasing quantity
with unmatched quality of correlation to clinical reality leads to
correlation and predictive deficits. We have to create a quantitative
hyperspace derived from high-order spatial and temporal observations
(manifold) to study the dynamics of disease processes in cells and
organisms. The parameterization of the observed physical process has to
represent the high-dimensional (5D, XYZ, time, spectrum) and multi-scale
reality underlying the disease process. Each physical or feature space can
be given a coordinate system (Cartesian, polar, gauge …) which puts
individual objects and processes into a relative relation to each other
for further quantitative exploration.

Homo siliconensis

Gathering more and better-quality information about cellular processes
will hopefully allow us to improve disease models up to a point where
improved in-silico models will help us to complement in-vivo and in-vitro
disease models (Sandblad B, 1992; Bassingthwaighte JB., 1995;
Bassingthwaighte JB, 2000; Higgins G, 2001; Loew LM, 2001; Slepchenko BM,
2003; Takahashi, K., 2003; Berends M, 2004; De Schutter E, 2004).

Gradually building the “Homo (sapiens) siliconensis” or in-silico man
will allow us to study and validate our disease models at different levels
of biological organization. Building a rough epi-cellular model, based on
our knowledge of physiology and gradually increasing the spatial and
temporal functional resolution of the model by increasing its
“cellularity” could allow for improving our knowledge and
understanding on the way to a full-fledged in-silico model of man.
(Infra-)cellular resolution is not needed in all cases, so the model
should allow for dynamically scaling the “granularity” of its
structural and functional resolution up and down in both space and time.
Global, low-density models could be supplemented by a patchwork of highly
defined cellular models and gradually merge into a unified multi-scale
dynamic model of the spatial and temporal organization of the human
cytome.

What to do and the way to go?

The goal of a Human Cytome Project

The functional and structural characterization (spatial, temporal) of the
processes and structures leading to the phenotypical expression of the
(human) cytome in a quantitative way is in my opinion the ultimate goal of
an endeavor on the scale of a Human Cytome Project (HCP). We should reach
a point where we are able to understand (complex) disease processes and
design disease models which are capable of capturing the multifactorial
complexity of the in-vivo, in-organism dynamics of disease processes
with high predictive power and correlation.

This knowledge should be made broadly available for the improvement of
diagnostics, disease treatments and drug discovery. It is the prerequisite
for coming to a better understanding of disease processes and for
developing and improving treatments for new, complex and life-threatening
diseases for which our current genome- and proteome-oriented approach
alone does not provide an answer.

Studying the Cytome

First try to walk and then run. Studying the (human) cytome as such is
basically another way of looking at research on cellular systems. We go
from a higher level of biological organization (cytome) to a lower one
(proteome and genome). Any research which starts from molecular
single-cell phenotypes in combination with exhaustive bioinformatic
knowledge extraction is cytomics (Valet G, 2003). The only thing you need is
something like a flow-cytometer or a (digital) microscope to extract the
appropriate datasets to start with. Even a small lab or group can take
this approach and prove the concept, either for diagnostics, drug
discovery or basic research. Generating cytome-oriented data and getting
results is within reach of almost every scientist and lab. Increasing the
throughput may be required for industrial research and for a large scale
project, but this is not necessary for a proof of concept or for studying
a specific subtopic.

Organizational aspects

To study the entire human cytome will require a broad multidisciplinary
and multinational approach, involving scientists from several countries
and various disciplines who work on problems from a functional and
phenotypical point of view and top-down, instead of bottom-up. Both
academia and industry will have to work together to avoid wasting too much
time on scattered efforts and dispersed data. The organizational
complexity of a large multi-center project will require a dynamic
management structure in which society (politicians), funding agencies,
academia and the industry participate in organizing and synchronizing the
international effort. Managing and organizing such an endeavor is a
daunting task and will require excellent managerial skills from those
involved in the process, besides their scientific expertise (Collins F.S.,
2003b).

The challenges of a Human Cytome Project will not allow us to concentrate
on only a few techniques or on systematically describing individual
components,
but we must keep a broad overview on the cell and its function and
phenotype by multi-modal exploration. We will need an open systems design
in order to be able to exchange data and analyze them with a wide variety
of exploratory and analytical tools, allowing us to create a broad
knowledge base and proceed with the exploration of the cytome without
wasting too much time on scattered data.

The project should be designed in such a way that, along the road,
intermediate results would already provide benefits to medicine and drug
development. Intermediate results could be derived from hotspots
found during the process and worked out in more detail by groups
specializing in certain areas. As such the project could consist of a
large scale screening effort in combination with specific topics of
immediate interest. The functional exploration of pathways involved in
pathological processes, would allow us to proceed faster towards an
understanding of the processes involved in a disease. It is best to take a
dual approach for the project: one track focusing on certain important
diseases (cancer, AD …), and another focusing on cellular mechanisms
such as the cell cycle, replication and cell-type differentiation (stem
cells). The elucidation of these cellular mechanisms will lead to the
identification of hot-spots for further research into disease processes
and allow for the development of new therapeutic approaches.

A strategy for exploration

A device can sample a physical space (XYZ) at a certain inner and outer
resolution, which in a cellular system translates into a structure space,
such as the nucleus, Golgi, mitochondria, lysosomes, membranes, etc. We
can also sample a time axis within a certain inner and outer resolution,
which in a cellular system translates into life-cycle stages such as cell
division, apoptosis and cell death. The spectral axis is used to
discriminate between spatially and temporally coinciding objects, by means
of artificially attached labels which allow us to use spectral
differentiation to identify cellular structures and their dynamics. This
expands the differentiating power of the probing system.
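
To make these sampling axes concrete, here is a minimal sketch of how
such a combined (XYZ, time, spectrum) record could be laid out in
software; the array sizes are arbitrary examples, not a prescribed
format:

    # A 5-D record of an imaging experiment: time, spectral channel and
    # three spatial axes. Sizes are illustrative placeholders.
    import numpy as np

    nx, ny, nz = 512, 512, 30   # spatial sampling grid (XYZ)
    nt = 100                    # time points along the sampled time axis
    nc = 4                      # spectral channels (one per label)

    stack = np.zeros((nt, nc, nz, ny, nx), dtype=np.uint16)

    # One voxel is addressed by all five coordinates, e.g. channel 2 of
    # the mid-plane at the tenth time point:
    plane = stack[10, 2, nz // 2]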

We use a combination of space, time and spectrum to capture and
differentiate structures and processes in and around cells. The cytome is
built from all the different cells and cell types of a multi-cellular
organism, so we multiplex our exploration over multiple cells and cell
types, such as hepatocytes, fibroblasts, etc.

In the cells we insert structural and functional watchdogs (reporters) at
different life-cycle time points, into different organelles and around
cells (Deuschle K, 2005). At the moment we already have a multitude of
reporters available to monitor structural and functional changes in cells
(fluorescent probes ...). This inserts a sampling grid or web into cells
which will report structural and functional changes which we can use as
signposts for further exploration. We turn cells into 4D arrays or grids
for multiplexing our observations of the spatial and temporal changes of
cellular metabolism and pathways. It is like using a 4D “spiderweb” to
capture cellular events. Instead of extracting the 4D matrix of cellular
structure and dynamics into 2D microarrays (DNA, protein …), we insert
probe complexes into the in-vivo intracellular space-time web. We create
an intracellular and in-vivo micro-matrix or micro-grid.

Structural and functional changes in cells will cause a space-time
"ripple" in the structural and functional steady state of the cell, and if
one of the reporters is in proximity to the status change it will provide
us with a starting point for further exploration. A living cell is not a
static structure, but an oscillating complex of both structural and
functional events. The watchdogs are the bait to capture changes and act
as signposts from which to spread out our cytome exploration. We could see
them as the starting point (seeds) of a shotgun approach or the threads of
a spiderweb for cytome exploration.

The spatial and temporal density and sensitivity of our reporters and
their structural and functional distribution throughout the cell will
define our ability to capture small changes in the web of metabolic
processes in cells. At least we capture changes in-vivo, closely aligned
with the space-time dynamics of the living cell. We should try to align
the watchdogs with hot-spots of cellular structure and function. The
density and distribution of watchdogs is a dynamic system, which can be
inhomogeneously expanded or collapsed depending on the focus of research.

RNA interference (RNAi) can silence gene expression and can be used to
inhibit the function of any chosen target gene (Banan M, 2004; Campbell
TN, 2004; Fire A., 1998; Fraser A. 2004; Mello CC, 2004; Mocellin, 2004).
Large-scale RNAi screening is now within reach (Sachse C, 2005). This
technique can be used to study the effect of in-vivo gene silencing on the
expressed phenotype (watchdog monitoring) in a transient way.

Stem cells can be made to differentiate into different cell types, and the
differentiation process can be monitored for spatial and temporal changes
and irregularities. By using stem cells we can mimic (and avoid) taking
multiple biopsies at different life stages of an individual and its cells.
The resulting cell types can be used for multiplexing functional and
structural research of intracellular processes. RNAi can be applied to
stem cell research to study stem cell function (Zou GM, 2005).

Technology

Human biology can be explored at multiple levels and scales of biological
organization by using many different techniques, such as CT, MRI, LM, EM,
etc., each providing us with a structural and functional subset of the
physical phenomena going on inside the human body. I will focus on the
cellular level.

The necessity to explore the cellular level poses some demands on the
spatial, spectral and temporal inner and outer resolution which have to be
met by the technology used to extract content from the cell. However,
there is no one-to-one overlap between the biological structure and
activity at the level of the cytome and our technological means to explore
this level.
Life does not remodel its physical properties to adapt to our exploratory
capabilities. The alignment of the scale and dimensions of cellular
physics with our technological means to explore is still far from perfect.
The discontinuities and imperfections in our exploratory capacity are a
cause of the fragmentation of our knowledge and understanding of the
structure and dynamics of the cytome and its cells. Our knowledge is
aligned with our technology, not with the underlying biology.

Image based cytometry

Every scientific challenge leads to the improvement of existing
technologies and the development of new technologies (Tsien R, 2003).
Technology to explore the cytome is already available today and exciting
developments in image and flow based cytometry are going on at the moment.
The dynamics of living cells is now being studied in great detail by using
fluorescent imaging microscopy techniques and many sophisticated light
microscopy techniques are now available (Giuliano KA, 1998; Tsien RY,
1998; Rustom A, 2000; Emptage NJ., 2001; Haraguchi T. 2002; Gerlich D,
2003b; Iborra F, 2003; Michalet, X., 2002; Michalet, X., 2003; Stephens
DJ, 2003; Zimmermann T, 2003). Studying intra-vital processes is possible
by using microscopy (Lawler C, 2003). Quantitative microscopy requires a
clear understanding of the basic principles of digital microscopy and
sampling to start with, which goes beyond the principles of the Nyquist
sampling theorem (Young IT., 1988).
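
As an elementary illustration of these sampling considerations (only the
starting point, since quantitative microscopy goes beyond this), a rough
Nyquist check might look as follows; all numbers are illustrative
assumptions, not recommended settings:

    # Is the pixel spacing at the specimen fine enough for the optical
    # resolution? The Rayleigh criterion is used as the resolution
    # estimate here.
    wavelength_nm = 520.0       # emission wavelength, e.g. a GFP-like dye
    numerical_aperture = 1.4    # objective NA
    magnification = 100.0
    camera_pixel_um = 6.45      # physical pixel pitch of the sensor

    rayleigh_nm = 0.61 * wavelength_nm / numerical_aperture  # ~227 nm
    nyquist_spacing_nm = rayleigh_nm / 2.0                   # <= ~113 nm
    actual_spacing_nm = camera_pixel_um * 1000.0 / magnification

    print(f"resolution limit : {rayleigh_nm:.0f} nm")
    print(f"required spacing : <= {nyquist_spacing_nm:.0f} nm")
    print(f"actual spacing   : {actual_spacing_nm:.0f} nm")
    print("adequately sampled" if actual_spacing_nm <= nyquist_spacing_nm
          else "undersampled")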

Advanced microscopy techniques are available to study the morphological
and temporal events in cells, such as confocal and laser scanning
microscopy (LSM), digital microscopy, spectral imaging, Fluorescence
Lifetime Imaging Microscopy (FLIM), Fluorescence Resonance Energy Transfer
(FRET) and Fluorescence Recovery After Photobleaching (FRAP) (Cole, N. B.,
1996; Truong K, 2001; Larijani B, 2003; Vermeer JE, 2004). Spectral
imaging microscopy and FRET analysis are applied to cytomics (Haraguchi T,
2002; Ecker RC, 2004). Fluorescent speckle microscopy (FSM) is used to
study the cytoskeleton in living cells (Waterman-Storer CM, 2002; Adams
MC, 2003; Danuser G, 2003).

Laser scanning (LSM) and wide-field microscopes (WFM) allow for studying
molecular localisation and dynamics in cells and tissues (Andrews PD,
2002). Confocal and multiphoton microscopy allow for the exploration of
cells in 3D (Peti-Peterdi J, 2003). Multiphoton microscopy allows for
studying the dynamics of spatial, spectral and temporal phenomena in live
cells with reduced photo toxicity (Williams RM, 1994; Piston DW, 1999;
Piston DW. 1999b; White JG, 2001).

Green fluorescent protein (GFP) expression is being used to monitor gene
expression and protein localization in living organisms (Shimomura O,
1962; Chalfie M, 1994; Stearns T. 1995; Lippincott-Schwartz J, 2001; Dundr
M, 2002; Paris S, 2004). Using GFP in combination with time-resolved
microscopy allows studying the dynamic interactions of sub-cellular
structures in living cells (Goud B., 1992; Rustom A, 2000). Labelling of
bio-molecules by quantum dots now allows for a new approach to multicolour
optical coding for biological assays and studying the intracellular
dynamics of metabolic processes (Chan WC, 1998; Han M, 2001; Michalet, X.,
2001; Chan WC, 2002; Watson A, 2003; Alivisatos, AP, 2004; Zorov DB,
2004).

The resolving power of optical microscopy beyond the diffraction barrier
is a new and interesting development, which will lead to so-called
super-resolving fluorescence microscopy (Iketaki Y, 2003). New microscopy
techniques such as standing wave microscopy, 4Pi confocal microscopy, I5M
and structured illumination are breaking the diffraction barrier and allow
for improving the resolving power of optical microscopy (Gustafsson MG.,
1999; Egner, A., 2004). We are now heading towards fluorescence nanoscopy,
which will push spatial resolution to far below 150 nm in the focal plane
and 500 nm along the optical axis (Hell SW., 2003; Hell SW, 2004).

Tools for exploring ion flux in cells, such as calcium flux, have been
available for a long time (Tsien R, 1981; Tsien R, 1990; Cornelissen, F,
1993). Locating
the spatial and temporal distribution of Ca2+ signals within the cytosol
and organelles is possible by using GFP (Miyawaki A, 1997). Fluorescence
ratio imaging is being used to study the dynamics of intracellular Ca2+
and pH (Bright GR, 1989; Silver RB., 1998; Fan GY, 1999; Silver RB., 2003;
Bers DM., 2003).

Microscopy is being used to study Mitochondrial Membrane Potentials (MMP)
and the spatial and temporal dynamics of mitochondria (Zhang H, 2001; Pham
NA, 2004). The distribution of H+ ions across membrane-bound organelles
can be studied by using pH-sensitive GFP (Llopis J, 1998).

Electron Microscopy allows studying cells almost down to the atomic level.
Atomic Force Microscopy (AFM) allows studying the structure of molecules
(Alexander, S., 1989; Drake B, 1989; Hoh, J.H., 1992; McNally HA, 2004).
Multiple techniques can be combined, such as using AFM for imaging living
cells and comparing the results with Scanning Electron Microscopy (SEM)
and Transmission Electron Microscopy (TEM) (Braet F, 2001).

High Content Screening (HCS) is available for high speed and large volume
screening of protein function in intact cells and tissues (Van Osta P.,
2000; Van Osta P., 2000b; Liebel U, 2003; Conrad C, 2004; Abraham VC,
2004; Van Osta P., 2004). New research methods are bridging the gap
between neuroradiology and neurohistology, such as magnetic resonance
imaging (MRI), positron emission tomography (PET), near-infrared optical
imaging, scintigraphy, and autoradiography (Heckl S, 2004).

Flow cytometry

Flow Cytometry allows us to study the dynamics of cellular processes in
great detail (Perfetto SP, 2004; Voskova D, 2003; Roederer M, 2004).
Interesting developments are leading to fast imaging in flow (George TC,
2004). Combining both image and flow based cytometry can shed new light on
cellular processes (Bassoe C.F., 2003).

Image analysis

In order to come to a quantitative understanding of the dynamics of
in-vivo cellular processes, image-processing methods for object detection,
motion estimation and quantification are required. The first step in this
process is the extraction of image components related to biologically
meaningful entities, e.g. nuclei, organelles, etc. Secondly, quantitative
features are attached to the selected objects, such as area, volume,
intensity, texture parameters, etc. Finally, a classification is done,
based on separating, clustering, etc. of these quantitative features.
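
A minimal sketch of these three steps on a synthetic image, assuming the
scikit-image and scikit-learn libraries; the feature set and the
two-cluster classification are arbitrary examples, not a prescribed
pipeline:

    # 1) extract objects, 2) attach quantitative features, 3) classify.
    import numpy as np
    from skimage import filters, measure
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    img = rng.random((256, 256))   # placeholder for a real 2-D image

    # 1) Extract image components related to biological entities
    threshold = filters.threshold_otsu(img)
    objects = measure.label(img > threshold)

    # 2) Attach quantitative features (area, intensity, shape) per object
    table = measure.regionprops_table(
        objects, intensity_image=img,
        properties=("label", "area", "mean_intensity", "eccentricity"))
    features = np.column_stack([table["area"],
                                table["mean_intensity"],
                                table["eccentricity"]])

    # 3) Classify the objects on their quantitative features
    groups = KMeans(n_clusters=2, n_init=10).fit_predict(features)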

New image analysis and quantification techniques are constantly being
developed and will enable us to analyze the images generated by the
imaging systems (Van Osta P, 2002; Eils R, 2003; Nattkemper TW, 2004;
Wurflinger T, 2004). The quantification of high-dimensional datasets is a
prerequisite to improve our understanding of cellular dynamics (Gerlich D,
2003; Roux P, 2004; Gerster AO, 2004). Managing tools for classifiers and
feature selection methods for the elimination of non-informative features
are being developed to manage the information content and size of large
datasets (Leray, P., 1999; Jain, A.K., 2000; Murphy, R.F., 2002; Chen, X.,
2003; Huang, K., 2003; Valet GK, 2004).
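
As a small illustration of such feature elimination (a sketch on
synthetic data, not the methods of the cited authors), one could drop
near-constant features and then keep only the most class-informative
ones:

    # Eliminate non-informative features before further analysis.
    import numpy as np
    from sklearn.feature_selection import (VarianceThreshold, SelectKBest,
                                           f_classif)

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 200))      # 500 cells, 200 features
    y = rng.integers(0, 2, size=500)     # e.g. treated vs. control

    X_var = VarianceThreshold(threshold=1e-3).fit_transform(X)
    X_sel = SelectKBest(f_classif, k=20).fit_transform(X_var, y)
    print(X.shape, "->", X_sel.shape)    # (500, 200) -> (500, 20)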

Imaging principles based on physics and human vision principles allow for
the development of new and interesting algorithms (Geusebroek J.M., 2001;
Geusebroek J. M., 2003; Geusebroek J. M., 2003b; Geusebroek J. M., 2005).
The necessary increase of computing power requires both a solution at the
level of computation and an increase in processing capacity (Seinstra
F.J., 2002; Carrington WA, 2004). Improving the automated quantification of
image content allows for a better understanding of microscopy images
(Huang K., 2004).

The development of new and improved algorithms will allow us to extract
quantitative data to create the high-dimensional feature spaces for
further analysis.

A framework for streamlining exploration

This section will be expanded in a separate document

My personal interest is to build a framework in which acquisition,
detection and quantification are designed as modules each using plug-ins
to do the actual work (Van Osta P, 2004) and which in the end can manage a
truly massive exploration of the human cytome.
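
A minimal sketch of what such a modular, plug-in based design could look
like; all module and plug-in names here are hypothetical placeholders,
not the actual framework:

    # Acquisition, detection and quantification as modules, each
    # delegating the actual work to an interchangeable plug-in.
    from typing import Callable, Dict

    class Module:
        """A pipeline stage forwarding its input to a chosen plug-in."""
        def __init__(self, plugins: Dict[str, Callable]):
            self.plugins = plugins
        def run(self, name: str, data):
            return self.plugins[name](data)

    acquisition = Module({"simulated": lambda _: [[0, 1], [1, 0]]})
    detection = Module({"threshold": lambda img: [px for row in img
                                                  for px in row if px > 0]})
    quantification = Module({"count": lambda objs: {"n_objects": len(objs)}})

    image = acquisition.run("simulated", None)
    objects = detection.run("threshold", image)
    print(quantification.run("count", objects))   # {'n_objects': 2}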

The basic principle of a digital imaging system is to create a digital
in-silico representation of the spatial, temporal and spectral physical
process which is being studied. In order to achieve this we try to lay
down a sampling grid on the biological specimen. The physical layout of
this sampling grid is in reality never a precise isomorphic cubical
sampling pattern. The temporal and spectral inner and outer sampling
resolution is determined by the physical characteristics of the sample and
its interaction with the detection technology being used.

The extracted objects are sent to a quantification module which attaches
an array of quantitative descriptors (shape, density …) to each object.
Objects belonging to the same biological entity are tagged to allow for a
linked exploration of the feature space created for each individual object
(Van Osta P., 2000; Van Osta P., 2002b, Van Osta P., 2004). The resulting
data arrays can be fed into analytical tools appropriate for analysing a
high-dimensional linked feature space or feature hyperspace.
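
To illustrate the idea of tagged objects and a linked feature space, here
is a minimal sketch with hypothetical per-object records; column names
and values are placeholders only:

    # Objects tagged with the biological entity (cell) they belong to,
    # so their feature vectors can be explored in a linked way.
    import pandas as pd

    objects = pd.DataFrame([
        {"cell": 1, "organelle": "nucleus",      "area": 182.0, "density": 0.71},
        {"cell": 1, "organelle": "mitochondria", "area": 12.5,  "density": 0.43},
        {"cell": 2, "organelle": "nucleus",      "area": 205.3, "density": 0.66},
        {"cell": 2, "organelle": "mitochondria", "area": 9.8,   "density": 0.51},
    ])

    # Linked exploration: aggregate the per-object features per cell
    print(objects.groupby("cell")[["area", "density"]].mean())
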
Data analysis and data management

Managing and analyzing large datasets in a multidimensional linked feature
space or hyperspace will require a change in the way we look at data
analysis and data handling. Analyzing a multidimensional feature space is
computationally very demanding compared to a qualitative exploration of a
3D image. We often try to reduce the complexity of our datasets before we
feed them into analytical engines, but sometimes this is a “reductio ad
absurdum”, below the level of meaningfulness. We have to create tools to
be able to understand high-dimensional feature “manifolds” if we want
to capture the wealth of data cell based research can provide.
Transforming a high-dimensional physical space into an even higher order
feature space requires an advanced approach to data analysis.

The conclusion of an experiment may be summarized in two words, either
“disease” or “healthy”, but the underlying high-dimensional
feature space requires a high-dimensional multiparametric analysis. Data
reduction should only occur at the level of the conclusion, not at the
level of the quantitative representation of the process. The alignment of
a feature manifold with the multi-scale and multidimensional biological
process would allow us to capture enough information to increase the
correlation of our analysis with the space-time continuum of a biological
process.
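
A minimal sketch of this principle on synthetic data: the full feature
space is carried through the analysis, and the reduction to a two-word
conclusion happens only at the prediction step:

    # Keep the high-dimensional representation; reduce only at the end.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    X = rng.normal(size=(1_000, 150))    # high-dimensional feature space
    y = rng.integers(0, 2, size=1_000)   # ground truth: 0 or 1

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr)

    # Only here is the feature space reduced to a two-word conclusion
    verdict = np.where(model.predict(X_te) == 1, "disease", "healthy")
    print(verdict[:5])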

Building the multidimensional matrix of the web of cross-relations between
the different levels of biological organization, from the genome, over the
proteome and the cytome, all the way up to the organism and its
environment, while studying each level in a structural (phenotype) and
functional way, will allow us to understand the mechanisms of pathological
processes and find new treatments and better diagnostic tools. A
systematic descriptive approach without a functional complement is like
running around blind: it takes too long to find out about the overall
mechanisms of a pathological process or to find the distant consequences
of a minute change in the pathway matrix.

We should also get serious about a better integration of functional
knowledge gathered at several biological levels, as the scattered data are
a problem in coming to a better understanding of biological processes. The
current data storage models are not capable of dealing with heterogeneous
data in a way which allows for in-depth cross-exploration. Data management
systems will need to broaden their scope in order to deal with a wide
variety of data sources and models. Storage is not the main issue; the use
and exploration of heterogeneous data is the centerpiece of scientific
data management. Data originating from different organizational levels,
such as genomic (DNA sequences), proteomic (protein structure) and cytomic
(cell) data, should be linked. Data originating from different modes of
exploration, such as LM, EM, NMR and CT, should be made cross-accessible.
The problem of linking knowledge originating from different levels of
biological integration is mainly due to a failure of multi-scale or
multilevel integration of scientific knowledge, from the individual gene
to the entire organism, with appropriate attention to functional processes
at each biological level of integration.

Standardization and quality

On the experimental side, standardization of experimental procedures and
quality control is of great importance to be able to compare and link the
results from multiple research centers. But quality is not only a matter
of experimental procedures; it is also a matter of disease-model
validation and of verifying the congruence of a model with clinical
reality.

We need to design procedures for instrument set-up and calibration (Lerner
JM, 2004). We need to define experimental protocols (reagents…) in order
to be able to compare experiments. In addition we need to standardize data
exchange procedures and standards such as CytometryML, Digital Imaging and
Communications in Medicine (DICOM), Open Microscopy Environment (OME XML)
and the Flow Cytometry Standard (FCS) (Murphy RF, 1984; Seamer LC, 1997;
Leif RC, 2003; Swedlow JR, 2003; Horn RJ. 2004; Samei E, 2004). A file
format such as the Image Cytometry Standard (ICS v.1 and 2) provides for a
very flexible way to store and retrieve multi-dimensional image data (Dean
P., 1990).

The methods used for data analysis, data presentation and visualization
need to be standardized also. We need to define quality control (QC)
procedures and standards which can be used by laboratories to test their
procedures. A project on this scale requires a registration and repository
of cell types and cell lines (e.g. ATCC, ECACC). This way of working is
already implemented for clinical diagnosis by organizations such as the
European Working Group on Clinical Cell Analysis (EWGCCA), which could
help to implement standards and procedures for a Human Cytome Project.

References

References can be found here.

Copyright notice and disclaimer

My web pages represent my interests, my opinions and my ideas, not those
of my employer or anyone else. I have created these web pages without any
commercial goal, but solely out of personal and scientific interest. You
may download, display, print and copy, any material at this website, in
unaltered form only, for your personal use or for non-commercial use
within your organization. Should my web pages or portions of my web pages
be used on any Internet or World Wide Web page or informational
presentation, a link back to my website (and where appropriate back to the
source document) should be established. I expect at least a short notice
by email when you copy my web pages, or part of them, for your own use.
Any information here is provided in good faith, but no warranty can be
made for its accuracy. As this is a work in progress, it is still
incomplete and may even be inaccurate. Although care has been taken in
preparing the information
contained in my web pages, I do not and cannot guarantee the accuracy
thereof. Anyone using the information does so at their own risk and shall
be deemed to indemnify me from any and all injury or damage arising from
such use. To the best of my knowledge, all graphics, text and other
presentations not created by me on my web pages are in the public domain
and freely available from various sources on the Internet or elsewhere
and/or kindly provided by the owner. If you notice something incorrect or
have any questions, send me an email.


Email: pvosta at cs dot com

The author of this webpage is Peter Van Osta, MD.

A first draft was published on Monday, 1 December 2003 in the
bionet.cellbiol newsgroup. I plan to post regular updates of this text to
the bionet.cellbiol newsgroup.

Latest revision on 16 April 2005
