- From: Owen Ambur <Owen.Ambur@verizon.net>
- Date: Wed, 26 Feb 2020 12:09:22 -0500
- To: Adam Sobieski <adamsobieski@hotmail.com>, "public-aikr@w3.org" <public-aikr@w3.org>
- Message-ID: <0a249733-5190-008f-6256-9d19575211f8@verizon.net>
Based upon my interchange with Adam, the about statements of the following organizations are now available in StratML format:

- AAAI - https://stratml.us/carmel/iso/AAAIwStyle.xml
  (This page, updated in 2007, references a strategic planning committee: https://www.aaai.org/Organization/name-change.php However, I don't see an actual strategic plan on their site.)
- Synthesia - https://stratml.us/carmel/iso/SNTHwStyle.xml
- SmartBody - https://stratml.us/carmel/iso/SBwStyle.xml

NOAA's AI strategy is now also available in StratML format: https://stratml.us/drybridge/index.htm#NOAAAIS

It would be good if NOAA were to begin applying the good practice set forth in section 10 <https://www.linkedin.com/pulse/open-machine-readable-government-owen-ambur/> of the GPRA Modernization Act (GPRAMA). Failure to do so is an instance of publicly funded artificial ignorance <https://www.linkedin.com/pulse/artificial-ignorance-owen-ambur/>.

With respect to fact-checking, I am more interested in what is done than in what is said. (After all is said and done <https://www.brainyquote.com/quotes/aesop_118961> ...) Understanding the results of what is done requires reliable performance reports -- preferably in open, standard, machine-readable format so that heroic effort is not required to understand who is doing what to whom.

Talk is cheap... and thus worth far less attention than it presently attracts. What matters are results... which are receiving far too little attention, in part because they are presently too hard to see, much less evaluate, until it is too late... which serves the special interests of the powers-that-be.

Owen

On 2/26/2020 2:34 AM, Adam Sobieski wrote:

> Owen,
>
> A longer-term initiative might involve appealing to AAAI to recommend including more discussion of justification, provenance and explanation in AI curricula. Knowledge representation and reasoning courses can equip students with the skills needed to model justification, provenance and explanation and to add these features, as desired, to database and knowledgebase projects.
>
> On these topics, an interesting futuristic technology is /real-time fact-checking/.
>
> Journalists, debate moderators, their support staffs, and audience members can, today, make use of Web search engines to obtain information pertinent to what politicians and orators are saying. In the near future, people might be able to input spoken-language audio into their computers to obtain, on their screens, real-time fact-checking and other information relevant to spoken utterances.
>
> This fact-checking or for-more-information content could appear on screen seconds after orators speak. With a brief delay in video feeds, the fact-checking and informational content could appear on screen as the utterances occur. There would likely be multiple such services available; journalists and debate moderators could make use of several simultaneously, selecting and assembling multi-source content.
>
> Fact-checking technologies have more uses than journalism and debate moderation during election seasons. People could make broad use of real-time fact-checking and for-more-information services when working on documents, presentations and slideshows in various business and organizational contexts.
>
> Argumentation technologists can envision services which provide, in real time, materials which support and materials which argue against provided spoken-language claims.
>
> With real-time fact-checking technologies and related services, developers could easily add useful features to their dialogue systems, Q&A systems and intelligent tutoring systems. Developers could access real-time fact-checking, for-more-information, and argumentation-based services to overlay provided content atop their hypervideo streams. Multiple services could be accessed when dialogue systems form their utterances, and multiple services could be used to composite content into what end-users interact with as a single WebRTC hypervideo stream.
>
> Best regards,
>
> Adam
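To make the pipeline Adam describes concrete, here is a minimal sketch of the flow from spoken audio to on-screen fact-check annotations. Every name in it is a hypothetical placeholder: transcribe_chunk(), extract_claims() and the FACT_CHECK_URL endpoint do not refer to any existing engine or service, and a real system would substitute production components for each.

    # Minimal sketch of a real-time fact-checking pipeline: audio chunks in,
    # claim/verdict pairs out. All service names are hypothetical placeholders.
    import json
    import urllib.request

    FACT_CHECK_URL = "https://example.org/factcheck"  # hypothetical endpoint

    def transcribe_chunk(audio_chunk: bytes) -> str:
        """Placeholder for any streaming speech-to-text engine."""
        raise NotImplementedError("plug in a real STT engine here")

    def extract_claims(transcript: str) -> list[str]:
        """Naive claim extractor; a real one would use NLP to isolate
        checkable factual assertions rather than splitting on periods."""
        return [s.strip() for s in transcript.split(".") if s.strip()]

    def check_claim(claim: str) -> dict:
        """POST one claim to the (hypothetical) fact-checking service and
        return its verdict plus supporting references."""
        request = urllib.request.Request(
            FACT_CHECK_URL,
            data=json.dumps({"claim": claim}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.load(response)

    def process_audio(chunks):
        """Transcribe, extract claims, check each one, and yield
        annotations a caller can render on screen as the orator speaks."""
        for chunk in chunks:
            transcript = transcribe_chunk(chunk)
            for claim in extract_claims(transcript):
                yield claim, check_claim(claim)

As Adam notes, several such services could be polled concurrently and their results merged before anything is rendered on screen.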
> *From: *Owen Ambur <mailto:Owen.Ambur@verizon.net>
> *Sent: *Tuesday, February 25, 2020 11:13 PM
> *To: *Adam Sobieski <mailto:adamsobieski@hotmail.com>; public-aikr@w3.org <mailto:public-aikr@w3.org>
> *Subject: *Re: Synthetic Media Community Group
>
> Adam, conceptually speaking, I believe we are in violent agreement.
>
> Provenance is also a key concept at the heart of records management. https://www.archives.gov/research/alic/reference/archives-resources/principles-of-arrangement.html
>
> What I look forward to learning is what, practically speaking, we might be able to do about it by working collaboratively -- particularly since basic document/records management principles seem to be too boring for the W3C ... and it's hard to learn to run if we do not first learn to crawl and walk.
>
> To coin a Yogi Berraism <https://ftw.usatoday.com/2019/03/the-50-greatest-yogi-berra-quotes>: You won't get there if you don't go there.
>
> Owen
>
> On 2/25/2020 7:49 PM, Adam Sobieski wrote:
>
> Owen,
>
> Also on the topics of the veracity and reliability of content -- and perhaps more relevant to AIKR -- is epistemology: the means by which to ensure the accuracy of information relayed by digital characters, e.g. intelligent tutoring systems and question answering systems.
>
> Brainstorming: referenced materials could appear atop synthesized video content, in a lower-third manner, as digital characters answer users’ questions. This approach could be of use for both video and hypervideo scenarios.
>
> Best regards,
>
> Adam
>
> https://en.wikipedia.org/wiki/Lower_third
>
> https://en.wikipedia.org/wiki/Hypervideo
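One way to picture the lower-third idea Adam brainstorms above is to composite a citation bar over each synthesized frame before it is encoded into the outgoing video or hypervideo stream. Here is a minimal sketch using the Pillow imaging library; the frame source, the reference text and the downstream encoding step are all assumed rather than taken from any existing system.

    # Minimal sketch: draw a citation "lower third" over one video frame.
    # Assumes frames arrive as Pillow RGBA images; encoding/muxing is out of scope.
    from PIL import Image, ImageDraw, ImageFont

    def add_lower_third(frame: Image.Image, citation: str) -> Image.Image:
        """Composite a semi-transparent bar carrying a reference string
        over the bottom third of an RGBA frame."""
        width, height = frame.size
        bar_top = height * 2 // 3
        overlay = Image.new("RGBA", frame.size, (0, 0, 0, 0))
        draw = ImageDraw.Draw(overlay)
        # Semi-transparent black band across the lower third of the frame.
        draw.rectangle([(0, bar_top), (width, height)], fill=(0, 0, 0, 160))
        draw.text((20, bar_top + 20), citation,
                  font=ImageFont.load_default(), fill=(255, 255, 255, 255))
        return Image.alpha_composite(frame, overlay)

    # Example: annotate a blank test frame with a placeholder reference.
    frame = Image.new("RGBA", (1280, 720), (40, 40, 80, 255))
    annotated = add_lower_third(frame, "Source: https://example.org/reference")
    annotated.save("annotated_frame.png")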
> *From: *Adam Sobieski <mailto:adamsobieski@hotmail.com>
> *Sent: *Tuesday, February 25, 2020 4:47 PM
> *To: *Owen Ambur <mailto:Owen.Ambur@verizon.net>; public-aikr@w3.org <mailto:public-aikr@w3.org>
> *Subject: *RE: Synthetic Media Community Group
>
> Owen,
>
> For those desiring more information about synthetic media, here are some hyperlinks:
>
> 1. http://smartbody.ict.usc.edu/
> 2. https://www.synthesia.io/
> 3. https://medium.com/@vriparbelli/our-vision-for-the-future-of-synthetic-media-8791059e8f3a
>
> I enjoyed the concept of democratized creativity mentioned in the article indicated by the third hyperlink. I am hoping that new standards can facilitate the artificial-intelligence-enhanced creation, modification, interchange, use and reuse of media including, but not limited to: imagery, video, audio, speech, text, documents, scripts and screenplays, dialogue, and 3D graphics. I have some ideas with respect to advancing computer animation (like BML) and articulatory synthesis (like SSML).
>
> Digital characters are a component of intelligent tutoring systems, which have interested me for some time.
>
> The output format points you broached seem applicable to MPEG, WebM and WebRTC.
>
> Best regards,
>
> Adam
>
> *From: *Owen Ambur <mailto:Owen.Ambur@verizon.net>
> *Sent: *Tuesday, February 25, 2020 1:34 PM
> *To: *Adam Sobieski <mailto:adamsobieski@hotmail.com>; public-aikr@w3.org <mailto:public-aikr@w3.org>
> *Subject: *Re: Synthetic Media Community Group
>
> Adam, I'll be interested to see how your interest in cloud-based automated planning relates to the StratML standard. As your plan comes together, I look forward to rendering it in StratML format for inclusion in our collection at https://stratml.us/drybridge/index.htm#W3C
>
> In the meantime, I encourage you to include StratML (ISO 17469-1) in your list of related standards.
>
> Based upon the televideo conference that Carl Mattocks arranged today, several of us will be using Chris Fox's StratNavApp to collaboratively draft a plan addressing knowledge representation for AI. https://www.stratnavapp.com/
>
> Chris' app outputs plans and reports in StratML format.
>
> Rendering content in open, schema-compliant, machine-readable format will greatly facilitate synthesis. In the CredWeb CG, I am also arguing that it will support evaluation of veracity and reliability. Why should we pay any particular attention to content which ignores applicable standards?
>
> Owen
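A minimal sketch of what machine-readability buys in practice: the StratML files listed at the top of this thread can be summarized with a few lines of Python. The only assumption made here is that the documents expose Name and Description elements, as the StratML Part 1 (ISO 17469-1) vocabulary does; the XML namespace is handled generically rather than hard-coded.

    # Minimal sketch: print the Name and Description elements found in a
    # StratML document, e.g. one of the files cited earlier in this thread.
    import urllib.request
    import xml.etree.ElementTree as ET

    def local(tag: str) -> str:
        """Strip any XML namespace, leaving the local element name."""
        return tag.rsplit("}", 1)[-1]

    def summarize_stratml(url: str) -> None:
        with urllib.request.urlopen(url) as response:
            root = ET.parse(response).getroot()
        for element in root.iter():
            if local(element.tag) in ("Name", "Description"):
                text = (element.text or "").strip()
                if text:
                    print(f"{local(element.tag)}: {text[:120]}")

    summarize_stratml("https://stratml.us/carmel/iso/AAAIwStyle.xml")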
> On 2/25/2020 12:06 AM, Adam Sobieski wrote:
>
> Artificial Intelligence Knowledge Representation Community Group,
>
> Introduction
>
> A new W3C Community Group is launching: the /Synthetic Media Community Group/. I would like to invite you to support the creation of this group and to join it: https://www.w3.org/community/groups/proposed/#synthetic-media .
>
> Our group is interested in every component of the modern architectures of artificial intelligence with which to generate or synthesize media and interactive media content.
>
> We intend to advance the state of the art with a number of new standards. With new and improved standards, teams and organizations will be better able to explore and develop the interoperable components which comprise these modern architectures of artificial intelligence.
>
> We hope that you can join us in the advancement of artificial intelligence standards and technologies.
>
> Synthetic Media
>
> Synthetic media is any content created or modified algorithmically. Applications of synthetic media include: education, journalism, and entertainment.
>
> Digital Characters
>
> Our group is interested in digital characters, behavior modeling, and the numerous other topics of computer animation, such as: lighting, cameras, timelines, tracks, and scene graphs.
>
> Dialogue Systems
>
> Our group is interested in the topics of multimodal mixed-initiative dialogue systems, including: speech recognition, natural language understanding, natural language generation and speech synthesis.
>
> Modeling and Simulation
>
> Our group is interested in modeling and computer simulation topics.
>
> Computational Narratology, Digital Screenplays and Interactive Narrative
>
> Computer-generated stories, scripts and screenplays can facilitate the complex and interactive behaviors of one or more digital characters, including interactions between digital characters and on-screen imagery, presentation slides, videos, 3D graphics, virtual sets and video walls.
>
> Our group is interested in the generation, including real-time generation, of stories, scripts, screenplays, storyboards and cinematography.
>
> Automated Planning and Scheduling
>
> Automated planning is a traditional approach to generating media content. Our group is interested in cloud-based automated planning and scheduling.
>
> Deep Learning
>
> Deep learning is a more recent approach to generating media content. Our group is interested in cloud-based deep learning.
>
> Content Evaluation and Computational Aesthetics
>
> Our group is interested in the automatic evaluation of plans, stories, scripts, screenplays, storyboards, cinematography, visual imagery, audio, dialogues, natural language, explanations, presentations and other media content.
>
> Computational aesthetics and evaluation technologies can be of use in guiding search, comparing alternatives, making design decisions, and training neural systems.
>
> Existing Standards and Technologies
>
> Existing standards and technologies relevant to our group include, but are not limited to: FML, BML, MURML, SSML, SMIL, WebRTC, HTML, RDF, and JavaScript.
>
> Conclusion
>
> Thank you. We hope that you can join us in the advancement of artificial intelligence standards and technologies.
>
> Best regards,
>
> Adam Sobieski
Received on Wednesday, 26 February 2020 17:09:42 UTC