
Re: Spatial queries against GENSAT or ABA

From: William Bug <William.Bug@drexelmed.edu>
Date: Sun, 4 Mar 2007 23:46:12 -0500
Message-Id: <7892D4E6-6F0C-4D1A-ACCF-98684D18C786@drexelmed.edu>
Cc: "'Maryann Martone'" <maryann@ncmir.ucsd.edu>, "'kc28'" <kei.cheung@yale.edu>, "'Alan Ruttenberg'" <alanruttenberg@gmail.com>, "'June Kinoshita'" <junekino@media.mit.edu>, "'Donald Doherty'" <donald.doherty@brainstage.com>, "'Gwen Wong'" <wonglabow@verizon.net>, "'W3C HCLSIG hcls'" <public-semweb-lifesci@w3.org>, "'Robert W. Williams'" <rwilliam@nb.utmem.edu>, <zaslavsk@sdsc.edu>
To: "Nigam Shah" <nigam@stanford.edu>
Very sensible question, Nigam.

Unfortunately, the nature of these data - both the atlas and each of
the 20,000 GenePainted brains - is 2D.  The GenePainted brains -
despite having been

There are statistical techniques (stereology) for making assertions
about 3D anatomical objects from 2D data.  I'm not certain they are
doing that with the ABA - YET.

What they are doing is the following (a massive simplification of the  
overall process):

3-D-ifying the 2D F&P mouse brain atlas
	1) They create a 3D grid for the atlas coordinate space in which
each grid quadrant is ~4.6 microns on edge (~100 cubic microns).  Each
of the F&P atlas image plates (both coronal & sagittal) is placed in
this grid.
	2) The 2D ROIs segmented on each F&P plate were reassembled into 3D
volumes (with minimal smoothing).
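The two steps above can be sketched in a few lines of Python.  This is a hypothetical illustration only - the plate arrays, spacing, and nearest-plate fill are toy stand-ins, not the actual F&P data or the ABA's reconstruction code - but it shows the idea of stacking 2D segmented plates into a 3D label volume on a regular grid:

```python
import numpy as np

# Hypothetical sketch: stack 2D segmented atlas plates (integer region
# labels) into a 3D label volume on a regular grid.  The voxel edge is
# taken from the text (~4.64 um on edge ~= 100 cubic microns per grid
# quadrant); the plate images and spacing are toy stand-ins.
VOXEL_EDGE_UM = 100 ** (1.0 / 3.0)   # ~4.64 um

def assemble_volume(plates, plate_spacing_um, voxel_edge_um=VOXEL_EDGE_UM):
    """Place each 2D plate at its depth in a 3D grid, filling the gaps
    between plates with the nearest plate (zeroth-order interpolation,
    standing in for the 'minimal smoothing' mentioned above)."""
    ny, nx = plates[0].shape
    nz = int(round((len(plates) - 1) * plate_spacing_um / voxel_edge_um)) + 1
    volume = np.zeros((nz, ny, nx), dtype=plates[0].dtype)
    for z in range(nz):
        nearest = int(round(z * voxel_edge_um / plate_spacing_um))
        volume[z] = plates[min(nearest, len(plates) - 1)]
    return volume

# Toy example: three 4x4 plates, 100 um apart
plates = [np.full((4, 4), label, dtype=np.int32) for label in (1, 2, 3)]
vol = assemble_volume(plates, plate_spacing_um=100.0)
```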

Registering GenePainted brains - and quantifying staining patterns  
for each
	1) Registration algorithms were used to match the 20,000
GenePainted brains (as Alan pointed out, ~4,000 cut coronally
[front-to-back], with - I assume - the remainder cut sagittally
[left-to-right]) to the atlas, to an accuracy of 100 - 300 microns.
(It's possible they have both coronal & sagittal brains for some
genes - I can't pick that quickly out of the methods.)
	2) Using the calculated registration parameters, the atlas grid is
warped and projected onto each GenePainted section.
	3) They then threshold each quadrant based on the normalized ISH
intensity and pseudocolor the results to create what they call a
"heat map" of expression for that slide.
	4) For each gene, the quadrant intensities are mapped back onto the
atlas brain regions.  This is done using the 3D re-assembled brain
region geometries, with each quadrant represented as a sphere.  The
radius of each sphere is proportional to the "heat map" intensity of
the quadrant, so that larger spheres in theory represent more
transcript for that gene.
	5) These spheres can then be used to both localize and quantify the  
amount of transcript for a given gene found across the entire  
collection of 3D atlas brain region geometries.
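Steps 3 - 5 above can be sketched as follows.  Everything here is an assumption for illustration - the function name, the linear radius-vs-intensity scaling, and summed sphere volume as a transcript proxy are mine, not the ABA's published formulas:

```python
import numpy as np

# Hypothetical sketch of steps 3 - 5: given per-quadrant normalized ISH
# intensities already warped into atlas space, threshold them, give each
# surviving quadrant a sphere whose radius scales with its "heat map"
# intensity, and tally expression per atlas region.

def quantify_expression(intensities, region_labels,
                        threshold=0.1, max_radius_um=50.0):
    """intensities: (N,) normalized ISH intensity per quadrant, in [0, 1]
    region_labels: (N,) integer atlas region id for each quadrant
    Returns {region_id: summed sphere volume in cubic microns}."""
    intensities = np.asarray(intensities, dtype=float)
    keep = intensities >= threshold               # step 3: threshold
    radii = max_radius_um * intensities[keep]     # radius proportional to intensity
    volumes = (4.0 / 3.0) * np.pi * radii ** 3    # larger sphere -> more transcript
    totals = {}
    for region, v in zip(np.asarray(region_labels)[keep], volumes):
        totals[int(region)] = totals.get(int(region), 0.0) + float(v)
    return totals

# Toy example: three quadrants, the first below threshold
totals = quantify_expression([0.05, 0.5, 1.0], [7, 7, 8])
```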

There are a lot of details left out here.  For instance, the 448
sections (~11.2 mm, centered front-back) and 160 sections (~4 mm,
centered left-right) pulled from each coronally- and sagittally-
sectioned brain, respectively, are split across 8 alternating
series.  Each series consists of sections from the same brain that
are 200 microns apart (8 x 25-micron section thickness).  Each of the
8 series is stained separately.  Series 4 & 8 are stained for Nissl
bodies to give a view of overall cytoarchitecture at 100-micron
intervals.  This means that although they are sampling at a
relatively high spatial frequency (25 microns) relative to most brain
regions in the mouse brain, there is a blip every 100 microns where
the gene-probe staining jumps 50 microns.  In other words, the
gene-stained series is non-isotropic along the cutting axis.
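The sampling arithmetic above can be double-checked in a few lines (a sketch; the 1-based series indexing is my assumption):

```python
# Check the sampling arithmetic described above: 8 alternating series of
# 25-micron sections.  (The 1-based series indexing is my assumption.)
SECTION_UM = 25
N_SERIES = 8

def series_positions(series_index, n_sections):
    """Depth (um) of each section assigned to one of the 8 alternating
    series; consecutive sections in a series are 8 x 25 = 200 um apart."""
    return [(k * N_SERIES + (series_index - 1)) * SECTION_UM
            for k in range(n_sections)]

coronal_extent_um = 448 * SECTION_UM          # 11,200 um = ~11.2 mm
within_series_step = series_positions(1, 2)[1] - series_positions(1, 2)[0]
# Nissl series 4 & 8 interleave to sample cytoarchitecture every 100 um:
nissl = sorted(series_positions(4, 3) + series_positions(8, 3))
nissl_steps = {b - a for a, b in zip(nissl, nissl[1:])}
```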

In other words, they are not really dealing with fully 3D data sets
but with 2D data sets upon which expedient and very logical
approximations have been made in order to provide some quantification
of the gene expression patterns relative to brain regions as defined
in a mouse brain atlas (F&P) commonly used in the research
community.  These data sets don't necessarily lend themselves to
treatment in the Google Earth API.  I'm also not certain how much of
the data generated in the GenePaint steps 1 - 5 above is publicly
available.  Either way, I would guess it would be tough to represent
this info in a 3D GIS system, unless you were able to work pretty
closely with the folks at the ABA.

Cheers,
Bill


On Mar 4, 2007, at 7:02 PM, Nigam Shah wrote:

> I did pass some emails around to the SMART Atlas folks early last  
> week in order to get their feedback on Alan's work on the Google  
> Maps Javascript API and backend PERL code to support caching  
> images. The Google Maps API is one that has come up endlessly in
> these atlasing discussions, and it's nice to see just how it can be  
> made useful - what it can and cannot do in this application space.
>
> Might have been asked before but why Google Maps API and not Google  
> Earth API (which is 3D). There are websites that already allow  
> tracking of flights in 3D using google earth API.
>
> -Nigam.

Bill Bug
Senior Research Analyst/Ontological Engineer

Laboratory for Bioimaging  & Anatomical Informatics
www.neuroterrain.org
Department of Neurobiology & Anatomy
Drexel University College of Medicine
2900 Queen Lane
Philadelphia, PA    19129
215 991 8430 (ph)
610 457 0443 (mobile)
215 843 9367 (fax)


Please Note: I now have a new email - William.Bug@DrexelMed.edu
Received on Monday, 5 March 2007 04:46:30 UTC
