Autocorrect highlighting standards

From: Timothy Holborn <timothy.holborn@gmail.com>
Date: Mon, 29 Feb 2016 17:18:36 +0000
Message-ID: <CAM1Sok1TCz+oKTEfj0n4NDX7aDLXyqhOo7gmvj-JWgBpLvF0dA@mail.gmail.com>
To: W3C Credentials Community Group <public-credentials@w3.org>, public-rww <public-rww@w3.org>
This is more of an A.I.-related functional spec request / consideration, but I wasn't sure where to make it. RWW (autocorrected to 'raw') and Credentials seemed to be the best environments...

I'm finding it difficult to know when I've mistyped something versus when the user experience has, at some stage, executed an 'autocorrect' function that leaves my carefully prepared post or email looking like I don't know how to write English and/or am sloppy.

I think it is extremely important to visually denote where bots / agents have modified human inputs without direct intervention / approval, and that it is declared which agent was responsible for any such changes to the content.

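The idea above could be sketched as a minimal data structure: every automated substitution carries the identity of the agent that made it, so a UI could highlight the change and attribute it. This is a hypothetical illustration, not an existing spec; the class and field names (`AgentEdit`, `AnnotatedText`, `agent`) are assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AgentEdit:
    """One automated change to human-authored text, with attribution."""
    agent: str        # hypothetical agent identifier, e.g. "autocorrect"
    position: int     # character offset where the change was applied
    original: str     # what the human actually typed
    replacement: str  # what the agent substituted

@dataclass
class AnnotatedText:
    """Human input plus a declared log of agent modifications."""
    text: str
    edits: List[AgentEdit] = field(default_factory=list)

    def apply(self, edit: AgentEdit) -> None:
        """Apply an agent edit and record it, so the change can later be
        highlighted and attributed to the responsible agent."""
        span = self.text[edit.position:edit.position + len(edit.original)]
        assert span == edit.original, "edit does not match current text"
        self.text = (self.text[:edit.position]
                     + edit.replacement
                     + self.text[edit.position + len(edit.original):])
        self.edits.append(edit)

doc = AnnotatedText("Seemed rww and credentials to be best")
doc.apply(AgentEdit(agent="autocorrect", position=7,
                    original="rww", replacement="raw"))
print(doc.text)            # "Seemed raw and credentials to be best"
print(doc.edits[0].agent)  # "autocorrect"
```

The point of the log is that nothing is silently lost: a reviewer (or the author, before pressing send) can diff `original` against `replacement` for each recorded edit and see exactly which agent intervened.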
The implications of autocorrect alone are likely to be quite significant. Even considering just education, it may be found that autocorrect is responsible for poorer grades. With particular reference to cloud-based systems, we all assume that people author something intentionally and that, when reviewing the document at a later date, it is the same as they left it or intended it to be.

This is a relatively simple accountability use-case for bots. I'm sure solution concepts may have far broader implications.

I honestly don't know which agent is changing things, what I missed, what was there before I pressed send, and/or whether something changed when I did; and it seemed to be different on different devices, with different applications performing different tasks.

Yet I also worry whether sending another email, where these experiences have impacted a particularly important email for instance, would be a better or worse use of a person's time. I'm also not sure where I made a mistake versus where an 'agent' failed to help, with unintended, undeclared, and somewhat onerous consequences. If a bot does something on its own, it should be made accountable for the impacts of its decisions. Can't be all care and no responsibility, IMHO.

Timh. (Autocorrected to time, but caught it. Hopefully..?)
Received on Monday, 29 February 2016 17:19:18 UTC