Re: Task proposal: Distributed self-publishing of experiments

AJ

There isn't such a tool to my knowledge. However, you might like to
look at what we're doing with SWAN, which encompasses exactly what
you propose. We have focused on Alzheimer research for pragmatic
reasons, and because we believe in working with the domain scientists
close to us. We also believe in deploying immediately useful CONTENT,
which we will do, and in getting traction on the ground with
individual researchers, which we are working towards. So that is how
we are proceeding.

But in our opinion all the concepts are generalizable.  And if people  
in the HCLS group would like to work with us to generalize them,  
we'll do it -- which is why we proposed the Knowledge Lifecycle task  
group.  Anyone wanting to work on this stuff together, please feel  
free to contact me directly.

Best

Tim Clark




On May 9, 2006, at 8:07 PM, AJ Chen wrote:

> I appreciate all the comments. Let me first make myself clear so
> that I won't get beaten up again! I'm trying to solve a specific
> problem, or unmet need, here. When I don't see a satisfactory
> solution, I make a proposal. If the feedback says a good solution
> already exists, then my job is done. If only bits and pieces of a
> potential good solution are out there, I'll refine the proposal to
> re-use the existing components. Like all of you, I don't have time
> to reinvent the wheel.
>
> So, what's the problem I'm trying to solve? One simple way to put
> it: there is no search engine where one can search at the level of a
> single experiment and its components. I mean any experiment, across
> all research fields. A few concrete questions one might ask such a
> search engine: What hypotheses do people have for this gene? What
> experiments have been done on this protein? What tools, reagents,
> instruments, and protocols have been used to characterize the
> toxicity of this compound? What conclusions have been drawn about
> this new phenomenon?
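>
> Just to make the second question concrete: once such RDF exists, the
> query could be asked roughly like this, sketched here in Python with
> rdflib and SPARQL (the exp: vocabulary and the aggregate URL are made
> up for illustration, and the UniProt URI is only an example identifier):
>
> # Sketch only: "what experiments have been done on this protein?"
> # The exp: terms and the source URL are hypothetical placeholders.
> from rdflib import Graph
>
> g = Graph()
> g.parse("http://example.org/aggregated-experiments.rdf")  # placeholder aggregate
>
> q = """
> PREFIX exp: <http://example.org/experiment#>
> SELECT ?experiment ?hypothesis ?conclusion
> WHERE {
>   ?experiment exp:studies <http://purl.uniprot.org/uniprot/P04637> ;
>               exp:hypothesis ?hypothesis ;
>               exp:conclusion ?conclusion .
> }
> """
> for row in g.query(q):
>     print(row.experiment, row.hypothesis, row.conclusion)
>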
> The solution to this problem, in my mind, requires researchers to
> publish their studies at the level of a single experiment, in a
> format (like RDF) that lets a computer understand the different
> parts of the experiment. It also requires search engines to
> aggregate all this RDF data and provide search over any part of the
> experiment. The third requirement is that the search engine is not
> limited to a specific domain. I'm aware that a few search engines
> for domain-specific experiments already exist or are being
> developed, and more will come. These are all important. But I also
> see a need for search engines that can find any experiment across
> all research areas, enabling data sharing and integration across
> the board. Such a broad-based search engine lacks the specificity
> of domain-specific engines, but it can be used by researchers in
> all fields and thus has the potential advantage of scale.
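>
> As a rough sketch of what a single-experiment description could look
> like, assuming a simple placeholder vocabulary (choosing the real
> terms is exactly the ontology work being proposed), one might write
> something like this with rdflib:
>
> # Sketch of one experiment described at the component level.
> # Every exp: predicate here is an invented placeholder, not an existing ontology.
> from rdflib import Graph, Literal, Namespace, URIRef
>
> EXP = Namespace("http://example.org/experiment#")
> g = Graph()
> g.bind("exp", EXP)
>
> e = URIRef("http://example.org/lab/exp-42")  # placeholder experiment URI
> g.add((e, EXP.hypothesis, Literal("Compound X is not toxic below 10 uM")))
> g.add((e, EXP.studies, URIRef("http://purl.uniprot.org/uniprot/P04637")))
> g.add((e, EXP.protocol, Literal("MTT cytotoxicity assay, 48 h incubation")))
> g.add((e, EXP.instrument, Literal("Plate reader")))
> g.add((e, EXP.conclusion, Literal("No significant toxicity observed below 10 uM")))
>
> print(g.serialize(format="turtle"))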
>
> Another way to look at why a general solution is useful is to ask
> this question: is there any tool we can provide to the research
> community that lets everyone benefit from semantic web technology
> today? The answer must be a general-purpose tool, not a
> domain-specific one like a search engine for microarray
> experiments. In the end, users will be best served by both
> general-purpose and domain-specific tools.
>
> If anyone knows of an ontology that is designed for publishing
> scientific projects and experiments across all disciplines, please
> let me know. I have been looking for one.
> Thanks,
>
> AJ
>
> On 5/9/06, Matthias Samwald <samwald@gmx.at> wrote:
>
>
> >Deliverables:
> >Ontology for publishing projects and experiments. There are
> >some domain-specific ontologies, such as the microarray experiment
> >ontology, that already exist today. This task is intended to develop
> >a general-purpose ontology for describing projects and
> >experiments in such a way that search and comparison of
> >components of experiments is possible.
>
> I don't think it is necessary to develop a new ontology for the
> task you have proposed. It would be sufficient, and already quite
> impressive, to develop a system that harvests and aggregates
> existing ontologies AND the ontologies that are developed in the
> other tasks. I think having such a system would be of great benefit
> to the other tasks, because it would demonstrate one of the main
> advantages of the RDF standards. It would probably suffice to have
> a main portal that aggregates RDF from a fixed set of websites and
> lets users explore the aggregated RDF with something like OINK [1].
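>
> A minimal harvesting step could be as simple as merging the RDF
> published at a fixed set of sites into one graph, which an OINK-like
> browser could then explore. A sketch with rdflib (the source URLs
> are placeholders):
>
> # Minimal aggregation sketch: merge RDF from a fixed list of sources.
> # The URLs below are placeholders, not real data feeds.
> from rdflib import Graph
>
> sources = [
>     "http://example.org/lab-a/experiments.rdf",
>     "http://example.org/lab-b/experiments.rdf",
> ]
>
> aggregate = Graph()
> for url in sources:
>     try:
>         aggregate.parse(url)  # rdflib guesses the serialization format
>     except Exception as err:  # skip sources that are down or malformed
>         print(f"skipping {url}: {err}")
>
> print(f"{len(aggregate)} triples in the aggregate graph")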
>
> On a side note, I would suggest that any RDF that is put online
> during the project should be submitted to Swoogle for faster indexing:
> http://swoogle.umbc.edu/index.php?option=com_swoogle_service&service=submit
>
> The Swoogle web interface is not something that could be used for a
> demonstration of RDF to scientists, though. At the moment, it is
> mainly useful for Semantic Web developers.
>
> kind regards,
> Matthias Samwald
>
>
>
> [1] http://www.lassila.org/blog/archive/2006/03/oink.html

Received on Wednesday, 10 May 2006 18:28:20 UTC