[MMSEM] interesting project: automatic multimodal annotation

Greetings all, I have just been pointed at

http://www.alphaworks.ibm.com/tech/marvel?open&S_TACT=105AGX59&S_CMP=GR&ca=dgr-eclpsw03awmarvel 

which I didn't know about and which has an interesting online demo.

 From there one finds LSCOM, an "expanded multimedia concept lexicon on 
the order of 1000. Concepts related to events, objects, locations, 
people, and programs have been selected following a multi-step process 
involving input solicitation, expert critiquing, comparison with related 
ontologies, and performance evaluation."

http://www.ee.columbia.edu/dvmm/lscom/

Peeking inside the "ontology" one finds approximately 850 concepts that 
have been extrapolated, along with a list of annotations using such 
terms for specific video segments (provided as a training set for 
classifiers, I think).

Terms might be as generic as "male", "statue", or "restaurant", but they 
get suspiciously specific at times, with terms such as "Saddam Hussein", 
"Steel Mill Worker", "Tennis", "Abused Woman", and "Abused Child" (but 
no "Abused Man", for example).


Giovanni

Received on Friday, 1 September 2006 11:12:10 UTC