Evaluation: Clinical Observations Interop Project

A few thoughts on evaluation for discussion next week...
 
Evaluation:
I was asked to think of evaluation ideas for the Clinical Obs Interop Project. Some areas that might be worth considering:
 
Process: Did we follow software development best practices (team members could elaborate), such as defining the use case, specifying requirements, and following sound development procedures?
 
Requirements: The representational requirements for eligibility criteria are multi-faceted and complex; I will give an overview in my talk at the meeting. Also, Vipul's requirements spreadsheet identifies all of the eligibility concepts in our 10 sampled protocols and might serve as a guide for an evaluation checklist. Other system requirements, such as Dan's, will undoubtedly emerge as we begin focused discussion and project strategy.
 
Functionality: Does our application do what we want it to do (per the use case)? That is, does it successfully identify patients whose data match (or fall within the ranges of) the selected inclusion criteria and who also lack any (or the selected) exclusion criteria? If yes, then our demo would be a success. We could also go further: have a clinical trials "expert" review EMR records from a set of patients, identify the eligible ones, and use that set as a "gold standard" for retrieval measures (recall/precision) or sensitivity/specificity calculations on the set that our application actually retrieves. A rough sketch of those calculations follows below.
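 
For concreteness, here is a minimal sketch (in Python) of how those measures could be computed once we have the expert's gold-standard set and the set our application retrieves. The patient IDs, the set contents, and the evaluate_retrieval function are purely illustrative assumptions of mine, not part of our application:

    # Minimal sketch: retrieval/diagnostic metrics against an expert gold standard.
    # All patient IDs and set contents below are hypothetical placeholders.

    def evaluate_retrieval(screened, gold, retrieved):
        """Compare the application's retrieved set against the expert's gold standard."""
        tp = len(retrieved & gold)             # eligible patients we found
        fp = len(retrieved - gold)             # ineligible patients we wrongly retrieved
        fn = len(gold - retrieved)             # eligible patients we missed
        tn = len(screened - retrieved - gold)  # ineligible patients we correctly excluded

        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0  # identical to sensitivity
        specificity = tn / (tn + fp) if (tn + fp) else 0.0
        return {"precision": precision,
                "recall/sensitivity": recall,
                "specificity": specificity}

    # Hypothetical example: 10 screened patients, 4 judged eligible by the expert.
    screened = {f"pt{i}" for i in range(1, 11)}
    gold = {"pt1", "pt2", "pt3", "pt4"}   # expert-identified eligible patients
    retrieved = {"pt1", "pt2", "pt5"}     # what the application returned

    print(evaluate_retrieval(screened, gold, retrieved))
    # precision = 2/3, recall/sensitivity = 2/4, specificity = 5/6

Note that recall and sensitivity are the same quantity here; specificity additionally requires knowing the full screened population, not just the retrieved set.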
 
These are the only evaluation ideas I have for now, but as new ideas emerge from the group, I will keep them organized so that we can include evaluation metrics in our project activities.
 
Thanks,
Rachel
 
 
 
Rachel Richesson, PhD, MPH
Assistant Professor
Pediatrics Epidemiology Center
USF College of Medicine, Department of Pediatrics
3650 Spectrum Blvd., Suite 100
Tampa, FL 33612
Office: (813) 396-9522
Fax: (813) 396-9601
Email: richesrl@epi.usf.edu
