Fwd: MediaStream, ArrayBuffer, Blob audio result from speak() for recording?

---------- Forwarded message ----------
From: guest271314 <guest271314@gmail.com>
Date: Thu, Jul 6, 2017 at 5:27 PM
Subject: Re: MediaStream, ArrayBuffer, Blob audio result from speak() for
recording?
To: Glen Shires <gshires@google.com>


Relevant bug for Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=1377816.
Feature request for Chromium: https://bugs.chromium.org/p/chromium/issues/detail?id=733051#c3.
Workaround so far on GitHub: https://github.com/guest271314/SpeechSynthesisRecorder.
It took a while to determine that "Monitor of Built-in Audio" had to be
selected instead of "Built-in Audio" at the .getUserMedia() prompt.
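For reference, the workaround in that repository follows this general
pattern (a simplified sketch, not the exact SpeechSynthesisRecorder
implementation; it assumes the user selects the monitor device at the
permission prompt):

```javascript
// Simplified sketch of the current workaround: route speechSynthesis
// output back in through getUserMedia() (choosing "Monitor of
// Built-in Audio" at the prompt) and capture it with MediaRecorder.
// Browser-only; the recordSpeech name and flow are illustrative.

// Small helper: pick the first MIME type the recorder supports.
function pickMimeType(candidates, isSupported) {
  return candidates.find(isSupported) || "";
}

async function recordSpeech(text) {
  // The user must choose the monitor of the output device here;
  // otherwise the microphone, not the TTS audio, is captured.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

  const mimeType = pickMimeType(
    ["audio/webm;codecs=opus", "audio/webm", "audio/ogg"],
    (t) => MediaRecorder.isTypeSupported(t)
  );
  const recorder = new MediaRecorder(stream, mimeType ? { mimeType } : {});
  const chunks = [];
  recorder.addEventListener("dataavailable", (e) => chunks.push(e.data));

  const utterance = new SpeechSynthesisUtterance(text);
  return new Promise((resolve) => {
    recorder.addEventListener("stop", () => {
      stream.getTracks().forEach((track) => track.stop());
      resolve(new Blob(chunks, { type: recorder.mimeType }));
    });
    // Stop recording when speech synthesis finishes.
    utterance.addEventListener("end", () => recorder.stop());
    recorder.start();
    speechSynthesis.speak(utterance);
  });
}
```

Calling recordSpeech("hello world") then resolves with a Blob that can be
attached to an email, streamed, or downloaded; note that it takes three
distinct APIs to obtain it.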

To begin within this email, three widely applicable and appropriate use
cases at the forefront are:

1) Persons who have issues speaking, e.g., persons who have suffered a
stroke or another communication-inhibiting affliction. They could convert
text to an audio file and send the file to another individual or group.
This feature would go towards helping them communicate with other persons,
similar to the technologies which assist Stephen Hawking in communicating;

2) Presently, the only person who can hear the audio output is the person
in front of the browser; in essence, the full potential of the text-to-speech
functionality is not being utilized. The audio result could be used as an
attachment within an email, a media stream, a chat system, or another
communication application. That is, it would give control over the generated
audio output;

3) Another application would be to provide a free, libre, open-source audio
dictionary and translation service - client to client, client to server,
and server to client.

Those are the main three use cases. There are others one can fathom, though
the above should be adequate to cover a wide range of users of the
implementation.

If, in your or your organization's view, those use cases are not compelling
or detailed enough, please advise and I will compose a more thorough
analysis and proposal.

The current workaround is cumbersome. Why should we have to use
navigator.mediaDevices.getUserMedia() and MediaRecorder to get the audio
output? It is not that the workaround is impossible to achieve; rather, why
do we need to use two additional APIs to get the audio as a static file?

At a minimum we should be able to get a Blob or ArrayBuffer of the
generated audio. The Blob or ArrayBuffer could, generally, be converted to
other formats, if necessary. For example, meSpeak.js already provides the
described functionality: http://plnkr.co/edit/ZShBbiFGEKIJX2WgErkl?p=preview.
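To illustrate how little is being asked for: once the API hands back either
representation of the generated audio, converting between a Blob and an
ArrayBuffer needs only standard Blob methods (a sketch; the function names
are illustrative, and the "audio/wav" default is an assumption about the
output format, not something the spec defines):

```javascript
// Sketch: converting between the two requested result types using
// only standard APIs. Either one returned by speak() would suffice.
async function blobToArrayBuffer(blob) {
  // Blob.prototype.arrayBuffer() returns a Promise<ArrayBuffer>.
  return blob.arrayBuffer();
}

function arrayBufferToBlob(buffer, type = "audio/wav") {
  // The type parameter is a guess at the container; adjust as needed.
  return new Blob([buffer], { type });
}
```

Either form can then be uploaded, attached, or passed to other APIs such as
AudioContext.decodeAudioData().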

Regards,
/guest271314


On Wed, Jul 5, 2017 at 9:56 AM, Glen Shires <gshires@google.com> wrote:

> If I understand correctly, you have a solution for one browser, but not
> with a second browser.  I suggest you post your question on that browser
> vendor's developer forum.
>
> You also asked about the possibility of adding an additional, optional
> parameter to the spec.  Typically, such feature requests begin with a
> description of the use case that it supports, as there are sometimes
> various ways to support a particular use case.  If you'd like to propose a
> feature request, please specify detailed use case(s) for them.
>
> Thanks,
> Glen
>

Received on Friday, 7 July 2017 01:21:09 UTC