Our comment | Response | RI comments / MJD additions |
---|---|---|
General: Is there a tag that allows one to change the language in the middle of a sentence (such as <html:span>)? If not, why not? This functionality needs to be provided. [03] | [3] Yes, the <voice> tag. In section 3.1.2 (xml:lang), we will note that the <voice> element can be used to change just the language. | No obvious issue here (see the sketch after this table) |
Abstract: 'is part of this set of new markup specifications': Which set? [04] | [4] "this set" refers to "standards to enable access to the Web using spoken interaction" from the previous sentence. If you believe this to be unclear, can you suggest an appropriately compact rewording (since this is text from the one-paragraph abstract)? | No. I suggest "The Voice Browser Working Group has sought to develop standards for markup to enable access to the Web using spoken interaction with voice browsers. The Speech Synthesis Markup Language Specification is one of these standards,..." |
Intro: Please shortly describe the intended uses of SSML here, [06] rather than having the reader wait for Section 4. | [6] Rejected. We had already planned to rearrange sections such that section 2 now contains the Document Form (formerly section 3.1), Conformance (formerly section 4), Integration (formerly 3.5), and Fetching (formerly 3.6) sections straight off. If you believe this to be insufficient, can you propose a specific text change for section 1? | I think you should still have a short paragraph at the beginning of the intro to indicate the intended use of SSML, who should use it, and how. This will help people: |
2.1.4: How is format='telephone' spoken? [30] | [30] How it would be spoken is processor-dependent. The <say-as> element only provides information on how to interpret (or normalize) a set of input tokens, not on how it is to be spoken. Also, as you pointed out in point 27, "format='telephone'" is merely an example and not a specified value, at least not at this time. | no comment |
2.1.4: Why are there 'ordinal' and 'cardinal' values for both [31] interpret-as and format? | [31] Both are shown as examples to indicate two possible ways it could be done. Neither is actually a specified way to use the element, as you pointed out in point 27. | no comment |
2.1.4 detail 'strict': 'speak letters with all detail': As opposed [33] to what (e.g. in that specific example)? | [33] In this example, without the detail attribute a processor might leave out the colon or the dash, or it might not distinguish between lower case and capital letters. However, this is not actually a specified way to use the attribute, as you pointed out in point 27. | no comment |
2.1.4, last table: There seem to be some fixed-width aspects in the [34] styling of this table. This should be corrected to allow complete viewing and printing at various overall widths. | [34] Rejected. As you suggested in point 27, we will be removing all of the tables of examples in this section. If and when we reintroduce this table, we will correct any styling errors that remain. | no comment |
2.1.4, 4th para (and several similar in other sections): [35] "The say-as element can only contain text." would be easier to understand; we had to look around to find out whether the current phrasing described an EMPTY element or not. | [35] Accepted with changes. The statement you refer to, which is present in all of the element descriptions, will be modified to more fully describe the content model for the element, although it may not be worded exactly as you suggest. | ok |
2.1.4. For many languages, there is a need for additional information. [36] For example, in German, ordinal numbers are denoted with a number followed by a period (e.g. '5.'). They are read depending on case and gender of the relevant noun (as well as depending on the use of definite or indefinite article). | [36] Rejected. We have had considerable discussion on this point. There are two parts to our response: (1) It is assumed that the synthesis processor will use all contextual information already at its disposal in order to render the text and markup it is given. For example, any relevant case or gender information that can be determined from text surrounding the <say-as> element is expected to be used. (2) The ways and contexts in which information other than the specific number value can be encoded via human language are many and varied. For example, the way you count in Japanese varies based on the type of object that you are counting. That level of complexity is well outside the intended use of the <say-as> element. It is expected in such cases that either the necessary contextual information is available, in normal surrounding text, as described in part 1 above, or the text is normalized by the application writer (e.g. "2" -> "zweiten"). We welcome any complete, multilingual proposals for consideration for a future version of SSML. | no comment |
2.1.4, 4th row of 2nd table: I've seen some weird phone formats, but [37] nothing quite like this! Maybe a more normal example would NOT pronounce the separators. (Except in the Japanese case, where the spaces are (sometimes) pronounced (as 'no').) | [37] Rejected. As you suggested in point 27, we will be removing these examples altogether. If we should decide to reintroduce them at some point, we would be happy to incorporate a revised or extended example from you. | ok |
2.1.6 The <sub> element may easily clash or be confused with <sub> [48] in HTML (in particular because the specification seems to be designed to allow combinations with other markup vocabularies without using different namespaces). <sub> should be renamed, e.g. to <subst>. | [48] Rejected. We have other elements such as <p> with the same potential conflict. Also, we have not particularly crafted element names to avoid conflicts with other markup vocabularies. We see no direct need to change this element name. | I still think, regardless of the potential for overlapping element names, that it would be more immediately apparent what the meaning of this element was (and therefore more user friendly) if it was called <subst>. |
2.2.1 It should be mentioned that in some cases, it may make sense to have [52] a short piece of e.g. 'fr' text in an 'en' text spoken by an 'en' text-to-speech converter (the way it's often done by human readers) rather than to throw an error. This is quite different for longer texts, where it's useless to bother a user. | [52] Rejected. This behavior is already permitted at processor discretion for arbitrary-length strings of text. Specific words or short phrases can be handled in a more predictable manner by creating custom pronunciations in an external lexicon. We do not believe this needs additional explanation in the document. | even if this is already allowed at processor discretion, many implementers may forget that this may be a more reasonable behavior, so it should be mentioned. |
2.2.1: We wonder if there's a need for multiple voices (e.g. a group of kids) [53] | [53] We have not had significant demand to standardize a value for this, e.g. <voice name="kids">. Individual processors are of course permitted to provide any voices they wish. | no comment |
2.2.1 attribute name: (in the long term,) it may be desirable to use [57] a URI for voices, and to have some well-defined format(s) for the necessary data. | [57] Rejected. This is an interesting suggestion that we will be happy to consider for the next version of SSML (after 1.0). | please consider this for the next version |
[01] For some languages, text-to-speech conversion is more difficult than for others. In particular, Arabic and Hebrew are usually written with none or only a few vowels indicated. Japanese often needs separate indications for pronunciation. It was not clear to us whether such cases were considered, and if they had been considered, what the appropriate solution was. SSML should be clear about how it is expected to handle these cases, and give examples. Potential solutions we came up with: a) require/recommend that text in SSML is written in an easily 'speakable' form (i.e. vowelized for Arabic/Hebrew, or with Kana (phonetic alphabet(s)) for Japanese). (Problem: displaying the text visually would not be satisfactory in this case); b) using <sub>; c) using <phoneme> (Problem: only having IPA available would be too tedious on authors); d) reusing some otherwise defined markup for this purpose (e.g. <ruby> from http://www.w3.org/TR/ruby/ for Japanese); e) creating some additional markup in SSML. | [1] Rejected. We reject the notion that on principle this is more difficult for some languages. For all languages supported by synthesis vendors today this is not a problem. As long as there is a way to write the text, the engine can figure out how to speak it. Given the lack of broad support by vendors for Arabic and Hebrew, we prefer not to include examples for those languages. | I suspect from discussions with WAI on this topic and some research with experts in the field, that the lack of broad support by vendors for Arabic and Hebrew is actually a function of the fact that (unvowelled) text in these scripts is more difficult to support than other scripts. Of course, this issue can be circumvented by adding vowels to all text used in SSML - that would probably be feasible for text written specifically for synthesis, but would not be appropriate for text that is intended to be read visually. I also worry that considering only languages "supported by synthesis vendors today" runs counter to the idea of ensuring universal access. It's like saying it's OK to design the web for English if the infrastructure only supports English. The i18n group is trying to ensure that we remove obstacles to adoption of technology by people from an ever growing circle of languages and cultures. Agreed with Richard. This is really important, and goes to the core of the I18N activity. There may be a chicken-and-egg problem for Hebrew and Arabic, and the spec should clearly state what is allowed and what is not. In addition, there are enough vendors for Japanese, I guess, so Japanese could be used as an example, and Arabic/Hebrew just explained in the text. |
General: Tagging for bidirectional rendering is not needed [02] for text-to-speech conversion. But there is some provision for SSML content to be displayed visually (to cover WAI needs). This will not work without adequate support of bidi needs, with appropriate markup and/or hooks for styling. | [2] Rejected. Special tagging for bidirectional rendering would only be needed if there were not already a means of clearly indicating the language, language changes, and the sequence of languages. In SSML it is always clear when a language shift occurs -- either when xml:lang is used or when the <voice> element is used. In any case, the encoding into text handles this itself. We believe that it is sufficient to require a text/Unicode representation for any language text. Visual or other non-audio rendering from that representation is outside the scope of SSML. | Disagree - see the example at http://www.w3.org/International/questions/qa-bidi-controls.html (in the Background) - the bidi algorithm alone is not sufficient to produce the correct ordering of text for display in this case. xml:lang is not sufficient or appropriate to resolve bidi issues because there are many minority languages that use RTL scripts. This is an important issue. |
2.1.5 and 2.1.6: Can you specify a null string for the ph and alias [47] attributes? This may be useful in mixed formats where the pronunciation is given by another means, e.g. with ruby annotation. | [47] Rejected. There is no intention that pronunciations can be given by other means within an SSML document. Any use of SSML in this way is outside the scope of the language. Note that pronunciations can of course be given in an external lexicon; it is conceivable that other annotation formats could be used in such a document. | If SSML will be grafted onto ordinary Japanese text written in, say, XHTML it is certain that at some point ruby text will be encountered. This is a visual device, but is character-based, involving a repetition of a portion of text in two different scripts - so the base text and the ruby text would both be read out by the synthesiser. This would not only sound strange, but be very distracting. What we are asking for is the ability to nullify one of the runs of text. It seems to me that this could happen in a number of ways. Presumably this could be done by removing the annotation or base in ruby text, but being able to nullify one of them (e.g. via a null ph or alias value) would avoid changing the source text. I would like to know what the SSML group thinks is the best approach, and think that you should add some note about expected behaviour in this case. |
2.2.3 What about <break> inside a word (e.g. for long words such as [60] German)? What about <break> in cases where words cannot clearly be identified (no spaces, such as in Chinese, Japanese, Thai). <break> should be allowed in these cases. | [60] Rejected. This is a tokenization issue. Tokens in SSML are delimited both by white space and by SSML elements. You can write a word as two separate words and it will have a break, you can insert an SSML element, or you can use stress marks externally. For Asian languages with characters without spaces to delimit words, if you insert SSML elements it automatically creates a boundary between words. You can use a similar approach for German, e.g. with "Fussballweltmeisterschaft". If you insert a <break> in the middle it actually splits the word, but that's probably what you wanted: Fussball<break>weltmeisterschaft. If you wish to insert prosodic controls, that would be handled better via an external lexicon which can provide stress markers, etc. | I'm confused. The reply says rejected, but then goes on to show an example of what we asked for. If a <break> automatically creates a boundary, then just say that it can be used in the middle of a word (or phrase in languages without spaces) and that's what happens (see the sketch after this table). |
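
The markup behaviors described in responses [3] and [60] above can be sketched together. This is a minimal illustration, not text from the responses: the namespace URI and version follow the SSML 1.0 draft under review, and the sentence content and the 200ms pause value are invented for the example.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xml:lang="en">
  <!-- [3]: <voice> with xml:lang changes just the language mid-sentence -->
  <s>The French word for cat is <voice xml:lang="fr">chat</voice>.</s>
  <!-- [60]: an element inserted mid-word creates a token boundary;
       the break duration here is an invented example value -->
  <s xml:lang="de">Fussball<break time="200ms"/>weltmeisterschaft</s>
</speak>
```
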
Our comment | Response | My comments |
---|---|---|
1.2, bullet 4, para 1: It might be nice to contrast the 45 phonemes [10] in English with some other language. This is just one case that shows that there are many opportunities for more internationally varied examples. Please take any such opportunities. | [10] We would welcome a specific text proposal from your group. Any language example is fine with us. | http://pluto.fss.buffalo.edu/classes/psy/jsawusch/psy719/Articulation-2.pdf says Hawai'ian has 11 phonemes. Hawai'ian is indeed very low in phonemes, but 11 seems too low. http://www.ling.mq.edu.au/units/ling210-901/phonology/210_tutorials/tutorial1.html gives 12 with actual details, and may be correct. http://www.sciam.com/article.cfm?articleID=000396B3-70AD-1E6E-A98A809EC5880105 contains other numbers: 18 for Hawai'ian, and more than 100 for !Kung. We could say something like "Hawai'ian includes fewer than 15 phonemes". Bernard Comrie's Major Languages of South Asia, The Middle East and Africa lists 29 phonemes for Persian. His book Major Languages of East & South East Asia lists 22 for Tagalog. The Atlas of Languages, by Comrie et al, lists 14 phonemes for Hawai'ian and says that Rotokas, a Papuan language of Bougainville in the North Solomons, is recorded in the Guinness Book of Records as the language with fewest phonemes: 5 vowels and 6 consonants. |
2.1.2, example 1: To make the example more realistic, in the paragraph [23] that uses lang="ja" you should have Japanese text - not an English transcription, which may not work as such on a Japanese text-to-speech processor. In order to make sure the example can be viewed even in situations where there are no Japanese fonts available, and can be understood by everybody, some explanatory text can provide the romanized form. (we can help with Japanese if necessary) | [23] We would be happy to accept your offer to rewrite our example using appropriate Japanese text. | Nihongo-ga wakarimasen. -> 日本語が分かりません。 (see the sketch after this table) |
2.1.5, 1st example: Please try to avoid character entities, as it [45] suggests strongly that this is the normal way to input this stuff. (see also issue about utf-8 vs. iso-8859-1) | [45] What would you suggest is the normal way? | Pure character data in utf-8. Perhaps we can help you with this example, if you need. |
2.2.1, 2nd example: You should include some text here. [54] | [54] Accepted. If you provided us with example text in Japanese here we would be more than happy to include it. | tbd |
The following elements also should allow xml:lang: [20] - <prosody> (language change may coincide with prosody change) - <audio> (audio may be used for foreign-language pieces) - <desc> (textual description may be different from audio, e.g. <desc xml:lang='en'>Song in Japanese</desc>) - <say-as> (specific construct may be in different language) - <sub> - <phoneme> | [20] Rejected/Question. For all but the <desc> element, this can be accomplished using the <voice> element. For the <desc> element, it's unclear why the description would be in a language different from that in which it is embedded; can you provide a better use case? In the <voice> element description we will point out that one of its common uses is to change the language. In 2.1.2, we will mention that xml:lang is permitted as a convenience on <p> and <s> only because it's common to change the language at those levels. We recommend that other changes in the language be done with the <voice> element. | Not sure why you should need to use the voice element in addition to these. First, it seems like a lot of redundant work. It is also counter to the general usage of xml:lang in XHTML/HTML, XML, etc. (e.g. you don't usually use a span element if another element already surrounds the text you want to specify). Allowing xml:lang on other tags also integrates the language information better into the structure of the document. For example, suppose you wanted to style or extract all descriptions in a particular language - this would be much easier if the xml:lang was associated directly with that content. It would also help reduce the likelihood of errors where the voice element becomes separated from the element it is qualifying. Re. "why the description would be in a language different from that in which it is embedded": if the author had embedded, e.g., a sound bite in another language (such as JFK saying "Ich bin ein Berliner"), the desc element could be used to transcribe the text for those who cannot or do not want to play the audio. A similar approach could be used for sites that teach language or multilingual dictionaries to provide a fallback in case the audio cannot be played (see the sketch after this table). |
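
The Japanese text offered in rows [23] and [54] above, and the <desc> use case from row [20], might combine as follows. A minimal sketch assuming the SSML 1.0 draft syntax; the audio URL and the transcription wording are invented for illustration.

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xml:lang="en">
  <!-- [23]/[54]: real Japanese text in place of an English transcription -->
  <p xml:lang="ja">日本語が分かりません。</p>
  <!-- [20]: <desc> transcribing embedded foreign-language audio;
       whether xml:lang should be allowed on <desc> is the open question -->
  <audio src="http://www.example.com/berliner.wav">
    <desc>JFK saying "Ich bin ein Berliner"</desc>
  </audio>
</speak>
```
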
Our comment | Response | My comments |
---|---|---|
Intro: 'The W3C Standard' -> 'This W3C Specification' [05] | [5] Accepted. | thankyou |
1.1, last bullet: add a comma before 'and' to make [09] the sentence more readable | [9] Accepted | thankyou |
1.5: The definition of anyURI in XML Schema is considerably wider [14] than RFC 2396/2732, in that anyURI allows non-ASCII characters. For internationalization, this is very important. The text must be changed to not give the wrong impression. | [14] Accepted. We will amend the text to indicate that only the Schema reference is normative and not the references to RFC2396/2732. | thankyou |
1.5 (and 2.1.2): This (in particular 'following the [15] XML specification') gives the wrong impression of where/how xml:lang is defined. xml:lang is *defined* in the XML spec, and *used* in SSML. Descriptions such as 'a language code is required by RFC 3066' are confusing. What kind of language code? Also, XML may be updated in the future to a new version of RFC 3066; SSML should not restrict itself to RFC 3066 (similar to the recent update from RFC 1766 to RFC 3066). Please check the latest text in the XML errata for this. | [15] Accepted. All that you say is correct. We will revise the text to clarify as you suggest. | thankyou |
2., intro: xml:lang is an attribute, not an element. [16] | [16] Accepted. Thank you. We will correct this. | thankyou |
2.1.1: 'The version number for this specification is 1.0.': please [18] say that this is what has to go into the value of the 'version' attribute. | [18] Accepted. | thankyou |
2.1.2., for the first paragraph, reword: 'To indicate the natural [19] language of an element and its attributes and subelements, SSML uses xml:lang as defined in XML 1.0.' | [19] Accepted with changes. This is related to point 15. We will reword this to correct the problems you mention in that point, but the rewording may vary some from the text you suggest. | thankyou |
2.1.2: 'text normalization' (also in 2.1.6): What does this mean? [21] It needs to be clearly specified/explained, otherwise there may be confusion with things such as NFC (see Character Model). | [21] Accepted. We will add a reference, both here and in section 2.1.6, to section 1.2, step 3, where this is described. | thankyou |
2.1.2, 1st para after 1st example: Editorial. We prefer "In the [24] case that a document requires speech output in a language not supported by the processor, the speech processor largely determines the behavior." | [24] Accepted | thankyou |
2.1.5 This may need a note that not all characters used in IPA are [41] in the IPA block. | [41] Accepted. | thankyou |
2.1.6 For abbreviations,... there are various cases. Please check [49] that all the cases in http://lists.w3.org/Archives/Member/w3c-i18n-ig/2002Mar/0064.html are covered, and that the users of the spec know how to handle them. | [49] Accepted. We will clarify within the text how application authors should handle the cases presented in the referenced email. | thankyou |
2.1.6, 1st para: "the specified text" -> [50] "text in the alias attribute value". | [50] Accepted. | thankyou |
2.2.1 The 'age' attribute should explicitly state that the integer [55] is years, not something else. | [55] Accepted | thankyou |
2.2.1 The variant attribute should say what it's index origin is [56] (e.g. either starting at 0 or at 1) | [56] Accepted. The text and schema will be adjusted to clarify that this attribute can only contain positive integers. | thankyou |
2.2.3 and 2.2.4: "x-high" and "x-low": the 'x-' prefix is part of [61] colloquial English in many parts of the world, but may be difficult to understand for non-native English speakers. Please add an explanation. | [61] Accepted. We will add such an explanation. | thankyou |
2.2.4: Please add a note that customary pitch levels and [62] pitch ranges may differ quite a bit with natural language, and that "high",... may refer to different absolute pitch levels for different languages. Example: Japanese generally has a much lower pitch range than Chinese. | [62] Accepted. | thankyou |
2.2.4, 'baseline pitch', 'pitch range': Please provide definition/ [63] short explanation. | [63] Accepted. We will add this. | thankyou |
2.2.4 'as a percent' -> 'as a percentage' [64] | [64] Accepted. | thankyou |
2.2.4 What is a 'semitone'? Please provide a short explanation. [65] | [65] Accepted. We will add this. | thankyou |
2.2.4, bullets: Editorial nit. It may help the first time reader to [67] mention that 'relative change' is defined a little further down. | [67] Accepted. | thankyou |
2.3.3 Please provide some example of <desc> [71] | [71] Accepted. We will add an example. | thankyou |
3.3, last paragraph before 'The lexicon element' subtitle: [73] Please also say that the determination of what is a word may be language-specific. | [73] Accepted. We will clarify this. | thankyou |
4.1 'synthesis document fragment' -> 'speech synthesis document fragment' [75] | [75] Accepted. | thankyou |
4.4 'requirement for handling of languages': Maybe better to [77] say 'natural languages', to avoid confusion with markup languages. Clarification is also needed in the following bullet points. | [77] Accepted. We will make this change. | thankyou |
App A: 'http://www.w3c.org/music.wav': W3C's Web site is www.w3.org. [79] But this example should use www.example.org or www.example.com. | [79] Accepted. We will correct this. | thankyou |
App D: Why does this mention 'recording'? Please remove or explain. [81] | [81] Accepted with changes. This was accidentally left in when originally copied from the VoiceXML specification. It will be corrected. | thankyou |
App G: What is meant by 'input' and 'output' languages? This is the [86] first time this terminology is used. Please remove or clarify. | [86] Accepted. This is old text. We will clarify. | thankyou |
[88] The appendices should be ordered so that the normative ones appear before the informative ones. | [88] Accepted. | thankyou |
Section 1, para 2: Please shortly describe how SSML and Sable are [07] related or different. | [7] Accepted. We will describe the relationship. | thankyou |
1.1 and 1.5: Having a 'vocabulary' table in 1.1 and then a [13] terminology section is somewhat confusing. Make 1.1 e.g. more text-only, with a reference to 1.5, and have all terms listed in 1.5. | [13] Accepted. We agree that this is confusing. We will make section 1.1 more text-only and cross-reference as necessary. We will also remove "Vocabulary" from the title of section 1.1. | thankyou |
2.1.1, para 1: Given the importance of knowing the language for [17] speech synthesis, the xml:lang should be mandatory on the root speak element. If not, there should be a strong injunction to use it. | [17] Accepted. xml:lang will now be mandatory on the root <speak> element. | thankyou very much |
2.1.2, 2nd para after 1st example: "There may be variation..." [25] Is the 'may' a keyword as in RFC 2119? I.e. are you allowing conformant processors to vary in the implementation of xml:lang? If yes, what variations exactly would be allowed? | [25] Yes, the "may" is a keyword as in RFC 2119, and conformant processors are permitted to vary in their implementation of xml:lang in SSML. Although processors are required to implement the standard xml:lang behavior defined by XML 1.0, in SSML the attribute also implies a change in voice which may or may not be observed by the processor. We will clarify this in the specification. | thankyou |
2.1.3: 'A paragraph element represents the paragraph structure' [26] -> 'A paragraph element represents a paragraph'. (same for sentence) Please decide to either use <p> or <paragraph>, but not both (and same for sentence). | [26] Accepted. We accept the editorial change. We will remove the <paragraph> and <sentence> elements. | thankyou |
2.1.4: <say-as>: For interoperability, defining attributes [27] and giving (convincingly useful) values for these attributes but saying that these will be specified in a separate document is very dangerous. Either remove all the details (and then maybe also the <say-as> element itself), or say that the values given here are defined here, but that future versions of this spec or separate specs may extend the list of values. [Please note that this is only about the attribute values, not the actual behavior, which is highly language-dependent and probably does not need to be specified in every detail.] | [27] Accepted. As you suggest, we will remove the examples from this section in order to reduce confusion. | no comment |
2.1.4, 'locale': change to 'language'. [29] | [29] Accepted. | thankyou |
2.1.4 'The detail attribute can be used for all say-as content types.' [32] What's a content type in this context? | [32] This wording was accidentally left over from an earlier draft. We will correct it. | thankyou |
2.1.5, <phoneme>: [38] It is unclear to what extent this element is designed for strictly phonemic and phonetic notations, or also (potentially) for notations that are more phonetic-oriented than usual writing (e.g. Japanese kana-only, Arabic/Hebrew with full vowels,...) and where the boundaries lie with respect to other elements such as <say-as> and <sub>. This needs to be clarified. | [38] Accepted. We will clarify in the text that this element is designed for strictly phonemic and phonetic notations and that the example uses Unicode to represent IPA. We will also clarify that the phonemic/phonetic string does not undergo text normalization and is not treated as a token for lookup in the lexicon, while values in <say-as> and <sub> may undergo both. | thankyou very much (see the sketch after this table) |
2.1.5 IPA is used both for phonetic and phonemic notations. Please [40] clarify which one is to be used. | [40] Accepted. IPA is an alphabet of phonetic symbols. The only representation in IPA is phonetic, although it is common to select specific phones as representative examples of phonemic classes. Also, IPA is only one possible alphabet that can be used in this element. The <phoneme> element will accept both phonetic and phonemic alphabets, and both phonetic and phonemic string values for the ph attribute. We will clarify this and add or reference a description of the difference between phonemic and phonetic. | thankyou |
2.2.4, Please state whether units such as 'Hz' are case-sensitive [70] or case-insensitive. They should be case-sensitive, because units in general are (e.g. mHz (milliHz) vs. MHz (MegaHz)). | [70] Accepted. Although the units are already marked as case-sensitive in the Schema, we will clarify in the text that such units are case-sensitive. | thankyou |
4.5 This should say that a user agent has to support at least [78] one natural language. | [78] Accepted. We will add this. | thankyou |
App F, last paragraph: 'Unfortunately, ... no standard for designating [84] regions...': This should be worded differently. RFC 3066 provides for the registration of arbitrary extensions, so that e.g. en-gb-accent-scottish and en-gb-accent-welsh could be registered. | [84] Accepted. We will revise the text appropriately. | thankyou |
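
Rows [38] and [40] above, together with the character-entity point in row [45] of the previous table, suggest the following sketch of <phoneme> usage. The IPA transcription is an invented example, entered directly as UTF-8 characters rather than as character entities.

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xml:lang="en">
  <!-- a phonetic (IPA) rendering supplied inline; the ph value is
       illustrative, not a normative transcription -->
  <phoneme alphabet="ipa" ph="təˈmɑːtoʊ">tomato</phoneme>
</speak>
```
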
These are comments that arise out of a reread of the specification during evaluation of comments.
Our comment | Response | My comments |
---|---|---|
This is an important topic that has been discussed with other groups since we did the review. There are a number of elements that allow only PCDATA content and attributes containing text to be spoken (e.g. the alias attribute of the <sub> element, and the <desc> element). Use of PCDATA precludes the possibility of language change or bidi markup for a part of the text (see the sketch after this table). Proposed changes: [Note: we have recently discussed this with the HTML WG wrt XHTML 2.0 and they have agreed to take similar action as we are recommending here.] | please provide | n/a |
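
A minimal sketch of the PCDATA limitation described in the row above, assuming the draft syntax: the two text runs below are an attribute value and plain character data respectively, so neither can carry an xml:lang change or bidi markup for just a part of the text.

```xml
<!-- the alias value is a plain attribute string: no embedded markup,
     hence no language or bidi tagging for a substring of it -->
<sub alias="World Wide Web Consortium">W3C</sub>
<!-- <desc> permits only character data, with the same limitation -->
<desc>Song title in Hebrew and English</desc>
```
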