
Re: please review mobileOK Basic Tests 1.0

From: Sean Owen <srowen@google.com>
Date: Wed, 13 Jun 2007 06:31:06 -0400
Message-ID: <e920a71c0706130331i727514ccl2355c7e389639594@mail.gmail.com>
To: "Henri Sivonen" <hsivonen@iki.fi>
Cc: public-bpwg-comments@w3.org, "public-html@w3.org WG" <public-html@w3.org>

(We will be converting all of this to last call comments by the way,
for a more "official" reply later.)

On 6/11/07, Henri Sivonen <hsivonen@iki.fi> wrote:
> The draft is premised on a vision about mobile browsing that assumes
> special mobile content. Instead of implying a separate Mobile Web, I
> think the W3C should push for one World Wide Web with mobile browsers
> that can access general Web content.

This is a central question indeed. The very existence of the Best
Practices Working Group assumes an answer to it: that the Mobile Web,
if it is to be approached at all, needs to be approached differently.
If you disagree with this, I think you'll disagree with everything the
group produces.
My personal view is that we will not be able to bring anything really
like the full web to a device with a tiny screen, no pointer, and no
keyboard -- regardless of server-side magic. We are talking about
low-end phones, not elite smartphones. Either you believe that the web
simply shouldn't come to such devices at all, or you believe in
approaching it differently, in designing specifically for this
context. There are in fact areas of the world where people have phones
more often than PCs, and so we would like, at the least, to think
about ways to support these users.

In the meantime, yes, let's talk about how portable we can make a full
web experience. I further do not believe that talking about designing
specifically for lesser mobile devices harms this effort in practice.
At least, I would hope to convince everyone here that the BPWG is not
working against it.

> In practice, the draft is implying expectations about UA behavior.

It doesn't actually prescribe tests or expectations on user agents of
course, so I'm not worried that anyone will get too far when looking
for mobileOK user agent requirements.

But you are saying that by defining what a minimal mobile user agent
looks like, a profile which is safe to assume, and then what content
for that minimal profile is like, you in turn imply what it's safe to
assume a device needs to support -- and possibly disincentivize
manufacturers from creating more capable devices. For example, by
declaring that sites should be able to deliver UTF-8 content foremost,
you perhaps send a message that it's not as important to support other
encodings. That could be so, though it seems unlikely that telling
sites to assume a display only 120px wide would ever discourage
someone from making a phone with a bigger screen.

It's necessary to pick some baseline. We've tried to make it clear
throughout that there are no tests on user agents here and that the
tests merely determine whether you seem to be able to accommodate a
really minimal phone when you see one.

> When people are just hunting the badge for marketing purposes, they
> may make silly workarounds to please the testing software while
> actually making the user experience worse.

Too true. We create the badge as an incentive for following the
guidelines, to reward adoption. That's OK -- the problem comes when
passing the tests requires doing something actually harmful in the
end. I hope this doesn't happen. It's for this reason that we've been
a little more inclined to create warnings rather than failures. If
practice shows that some of the tests are causing this problem, well,
they will be fixed.

For what it's worth, the labeling part of the specification will come
later. For now, there is no real badge.

> It is rather sad that the supported image formats do not include PNG.

This follows from the fact that PNG support isn't as widespread as GIF
or JPEG -- nothing personal at all against PNG.

> These two should not apply at all to Refresh as Refresh is not used
> on the HTTP level in the real world. On the other hand, they should
> both be failures for the cache control because caching proxies should
> be able to work on the HTTP level without looking inside payload. For
> XML media types, the meta charset is always bogus so both cases
> should fail to avoid people depending on the bogus meta charset. For
> text/html, the case where the real HTTP header and the meta charset
> disagree should be a failure, because the disagreement is a symptom
> of something being wrong in the content production or serving process.

Let me defer on this to a more considered last call comment reply. I
think you have a good point here, that you want to be stricter on this
to really discourage ever relying on a different value in <meta>.

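For concreteness, the stricter check being suggested might look
roughly like this (a sketch of my own, not anything the spec defines),
treating disagreement between the real HTTP header and the meta
charset as a failure for text/html:

```python
# Hypothetical sketch: compare the charset in the HTTP Content-Type
# header against the meta http-equiv charset and FAIL on disagreement.
import re

def charset_consistency(http_content_type, html_source):
    """Return 'PASS', 'FAIL', or 'NO_META' for a text/html resource."""
    m = re.search(r'charset=([\w-]+)', http_content_type, re.IGNORECASE)
    http_charset = m.group(1).lower() if m else None

    meta = re.search(
        r'<meta[^>]+http-equiv=["\']?content-type["\']?[^>]*'
        r'charset=([\w-]+)', html_source, re.IGNORECASE)
    meta_charset = meta.group(1).lower() if meta else None

    if meta_charset is None:
        return 'NO_META'
    if http_charset and http_charset != meta_charset:
        return 'FAIL'   # disagreement signals a broken production process
    return 'PASS'
```
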
> Specs written today should probably reference CSS 2.1 instead of
> Level 1.

We'd love to -- again we're describing what it's safe to assume in
practice and not what we'd like to see. Roughly speaking, CSS 1 +
@media were deemed the right minimum to assume.

> >    For XML 1.1 [XML11] it is defined in section 1.3 as consisting
> > of the
> >    same characters with the addition of NEL (#x85) and the Unicode
> > line
> >    separator character, (#x2028).
> Surely an XML 1.1 document cannot get mobileOK approval.

I don't follow this point -- this part is just describing what is
considered whitespace for purposes of counting extraneous whitespace.

> >    In the following, note that HTTP headers should be used rather than
> >    meta elements with http-equiv attributes, which are commonly not
> > taken
> >    into account by proxies.
> The "should" should probably be a "must" for consistent results.

Yeah, the next sentence says "must", so this one should too.

> >    If any cache related header contains an invalid value, warn
> Why not fail?

Will have to get back to you on this in a last call comment but I
believe it's because, for example, Cache-Control may legitimately take
on new, as-yet-unknown values later, so we'd rather not declare them
definitely wrong. The same applies to ETag, I think, and to Pragma,
but not really to Date or Expires; and anything that's just
syntactically wrong probably shouldn't pass, so this may well need a
bit of refinement.

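The warn-vs-fail split above could be sketched like this (directive
names and return values are my own illustration, not the spec's):
unknown Cache-Control directives only warn, since HTTP permits
extension directives, while outright syntax errors fail.

```python
# Sketch: validate a Cache-Control header value, warning on unknown
# (possibly legitimate extension) directives, failing on bad syntax.
import re

KNOWN_DIRECTIVES = {
    'public', 'private', 'no-cache', 'no-store', 'no-transform',
    'must-revalidate', 'proxy-revalidate', 'max-age', 's-maxage',
}

def check_cache_control(value):
    """Return 'PASS', 'WARN', or 'FAIL' for a Cache-Control value."""
    result = 'PASS'
    for directive in value.split(','):
        directive = directive.strip()
        m = re.fullmatch(
            r'([A-Za-z][A-Za-z0-9-]*)(?:=(?:"[^"]*"|[^\s",]+))?',
            directive)
        if not m:
            return 'FAIL'            # syntactically wrong: never pass
        if m.group(1).lower() not in KNOWN_DIRECTIVES:
            result = 'WARN'          # possibly a legitimate extension
    return result
```
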
> > and hence this test
> >    fails if a resource cannot be encoded in UTF-8.
> s/cannot be/is not/

Yep, I think that's a good edit. The sentiment was that if it's not
encoded in UTF-8, then presumably *the site* cannot encode it as
UTF-8.

> XML provides an unambiguous default. Is there a practical reason, due
> to broken real-world UAs perhaps, not to PASS defaulted UTF-8?

Right, we're asking content to go ahead and specify the default to
give every possible hint to the UAs, and stop them from ever trying to
second-guess the default and choose another encoding.

> >    If the HTTP Content-Type header specifies an Internet media type
> >    starting with "text/":
> This should apply to text/html.

... which begins with "text/", right?

> >    If there is no meta element with http-equiv attribute that
> > specifies
> >    UTF-8 character encoding, FAIL
> Note that the current HTML 5 draft uses an attribute called charset.
> Having a meta charset in a document that is served using an XML type
> (text/xml, application/xml and */*+xml) should probably be a warn at
> minimum considering that a charset meta in XML is bogus.

We do require that encoding be specified by something besides <meta>,
yes. Its presence may yet be helpful to user agents -- I believe we
can dig up some notes to this effect in the HTML 4 spec. It shouldn't
be relied upon exclusively, but for that reason, I think we're
reluctant to call this wrong.

> >    If the document's Internet media type is "text/html" or
> >    "application/vnd.wap.xhtml+xml", warn
> What's wrong with HTML served as text/html?

XHTML Basic 1.1 is what's assumed and required, which should be served
as application/xhtml+xml, but may be served as these other types.
That's the reasoning behind the warn.

> >    If the document does not contain a DOCTYPE declaration, FAIL
> I think the W3C should promote doctypelessness for application/xhtml
> +xml. See http://hsivonen.iki.fi/doctype/#xml
> However, documents that rely on the WAP dollar substitution must have
> a doctype that activates the dollar substitution in Opera. Still,
> relying on the dollar substitution is a bad idea.

No DOCTYPE means the document can't distinguish itself as mobile,
which goes back to the first point here I suppose. It is at the very
least required by XHTML Basic so should be kept for that reason.

Dollar substitution -- you're referring to WML? That's out of scope;
only XHTML Basic 1.1-like documents are considered here.

> >    If it does not validate against the XHTML-MP 1.2 DTD, FAIL
> The spec is lacking sufficient guidance on how to validate an HTML
> document against an XML DTD. Should perhaps an HTML5 parser be used
> with a DTD validator that is decoupled from an XML parser?

Which spec do you mean... I don't think we're defining how to validate
anything, but rather referring to the usual XML validation against
defined DTDs.

> Requiring content to validate against a mobile profile DTD does not
> promote the unity of the World Wide Web.

Yeah, this goes back to the same point at the top.

> >    If the response specifies an Internet media type that is not
> >    "text/css", "image/jpeg" or "image/gif", FAIL
> Is there a good reason to exclude PNG?

Here, it's because the DDC isn't defined to assume PNG support, and
the reason for that is noted above.

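The linked-resource check under discussion is essentially a whitelist;
a minimal sketch (the function name is mine) might be:

```python
# Sketch: only text/css, image/jpeg and image/gif are in the DDC
# baseline, so any other type -- including image/png -- fails.
ALLOWED_TYPES = {'text/css', 'image/jpeg', 'image/gif'}

def check_linked_resource(content_type):
    # Strip any parameters such as "; charset=..." before comparing.
    media_type = content_type.split(';', 1)[0].strip().lower()
    return 'PASS' if media_type in ALLOWED_TYPES else 'FAIL'
```
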
> >    If the element's value attribute is missing or empty, and an
> > inputmode
> >    attribute is not present, warn
> This seems excessive as it is quite likely that things will be just
> fine without content micromanaging the input mode on the UA.

Fair enough -- this was enshrined as a Best Practice for better or
worse. There are clearly some arguments for it. For our purposes here
(mobileOK) we're basing what we do on Best Practices.

> This is a bad idea because it encourages badge hunters to include
> bogus alt text that actually harms accessibility. Tests like this
> only lead to an arms race where the placeholder data always gets a
> step more complicated than what the testing tools can detect as a
> placeholder.

The alternative is not to test for this at all, I suppose, and that
extends into an argument against alt entirely. If you're not going to
encourage people to use it, for fear of abuse, shouldn't it be
dropped?

> del and ins are legitimate in both HTML 4.01 and in the current HTML
> 5 draft. menu is legitimate in HTML 5.

But they are not in XHTML Basic!

> >    If the document contains any b, big, i, small, sub, sup or tt
> >    elements, warn
> These elements are relatively common and harmless in practice. This
> warning seems excessive.

<b> and <i>, yes. I agree. I have no doubt this will trigger a lot of warnings.

Many thanks, and like I said these will all be recorded in the last
call comments and further circulated.

Received on Wednesday, 13 June 2007 10:31:32 GMT
