- From: Matt Williams <matthew.williams@cancer.org.uk>
- Date: Tue, 12 Jun 2007 19:59:45 +0100
- To: Daniel Rubin <rubin@med.stanford.edu>, Semantic Web <semantic-web@w3.org>
Dear Daniel,

Thanks for your comments.

> Actually, sometimes the interpretation *is* part of the evidence -- best
> example is medical imaging, wherein the radiologist's interpretation of the
> images is part of the primary evidence (the image is the "raw"
> evidence, but you have no result without the radiology interpretation of
> the image).

Well, you still have the X-ray, and most emergency medicine is done on X-rays that are not interpreted by radiologists. I agree that the interpretation is part of the evidence.

> Interpretation also transforms raw data into recoded variables that are
> also used as evidence, for example in interpreting raw EKG tracings to
> give the label of "ventricular tachycardia" or recording a sodium of 150
> as "high sodium."

One solution is to make a distinction between data and evidence: the former are "traces", the latter are inferences on the data. The problem with this is that a) it's at odds with common usage, and b) it tends to fall apart again with reported data, which are subjective and hence already interpreted (did the patient wince when I dug in their RIF?).

>> To take the radiology example below
>>
>>>> So evidence is a function of the facts, the
>>>> analysis method, the method of inference, and perhaps even the
>>>> observer (e.g., if the evidence is a radiology image or physical
>>>> exam, there is inter-observer variation).
>>>> And it's definitely necessary to relate the hypotheses to the
>>>> evidence with probabilities
>>
>> I would suggest that the interpretation of the evidence is a function
>> of the facts (plus other things). However, the facts are not stable
>> (e.g. with a physical examination) and may conflict with each other;
>> therefore inconsistency is not just a matter of which inference
>> procedure you choose, it is also a matter of which facts (your
>> premises) you start from.
>>
>> It is also not "definitely necessary to relate hypotheses to evidence
>> with probability" (although it may be useful).
>> There are a load of
>> other techniques that don't use probability: e.g. Wigmore Charts (from
>> the 1930s onwards) and, more recently, non-monotonic logical techniques.

> I suggested the importance of probabilities because of their utility in
> the biomedical domain. Have the other methods you cite been used in
> biomedicine? If so, I'd be very interested in looking at the citations.

I would suggest you look at OpenClinical for a (very broad) overview (http://www.openclinical.org/home.html). For some more specific citations, look at http://www.acl.icnet.uk/lab/papers.html#2006. You might want to look at the Bioinformatics 2006 and BJC 2006 papers for a start.

On a more general note, much of the proceedings of conferences such as AIME is logic-based, and my (limited) knowledge of guideline formalisation suggests that most of it is done in (various flavours of) logic -- see work on Asbru, GLIF, etc.

If you want to see a very small comparison of the two approaches, I have just given a talk on integrating Bayes Nets & Argumentation, which builds on some earlier work I did with someone at the University of Kent. Link here: http://www.evidencescience.org/diary/detail.asp?ID=398

I'm aware that there is also a long history of probabilistic work (de Dombal is the one I know best). The point I'm trying to make is that there is a serious history of non-probabilistic work as well.

HTH,

Matt

>> For a good intro, I would recommend David Schum's book "The
>> Evidential Foundations of Probabilistic Reasoning". Also, a look at the
>> evidence science website might be good: http://www.evidencescience.org/
>
> Thanks for the pointers.
> Daniel
>
>> HTH,
>>
>> Matt

--
http://acl.icnet.uk/~mw
http://adhominem.blogsome.com/
+44 (0)7834 899570
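[Editor's note: for readers unfamiliar with the probabilistic side of this exchange, the "relate hypotheses to evidence with probabilities" step Daniel describes amounts to a single Bayesian update. The sketch below is a minimal illustration only -- the function name and all numbers are hypothetical and come from neither correspondent's work.]

```python
# Minimal sketch of relating a hypothesis H to evidence E with Bayes' rule.
# All numbers are hypothetical, chosen purely for illustration.

def posterior(prior, likelihood, false_positive_rate):
    """P(H | E) via Bayes' rule, given P(H), P(E | H), and P(E | not-H)."""
    p_evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / p_evidence

# Hypothetical clinical example: 1% prevalence, 90% sensitivity,
# 5% false-positive rate.
p = posterior(prior=0.01, likelihood=0.90, false_positive_rate=0.05)
print(round(p, 3))  # prints 0.154
```

Even with strong evidence (90% sensitivity), the low prior keeps the posterior modest -- the quantitative behaviour that non-monotonic approaches such as Wigmore charts handle qualitatively, by defeasible argument rather than by number.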
Received on Tuesday, 12 June 2007 19:00:01 UTC