[Bug 6774] <mark> element: restrict insertion by other servers

http://www.w3.org/Bugs/Public/show_bug.cgi?id=6774

--- Comment #14 from Nick Levinson <Nick_Levinson@yahoo.com>  2009-06-30 07:31:13 ---
That the Internet is outside of law has always been a myth. E.g., you're not
free to copy from Wikipedia without limit. Most of its text is protected by
licenses that reserve rights
(<http://en.wikipedia.org/wiki/Wikipedia:Copyrights>). Most or all open-source
licenses reserve rights. Moreover, many websites impose terms for the use of
their sites. The usual legal ground is that the site is a chattel and that use
violating the terms is a trespass to chattels. For example (without judging
the legal quality of the notice): Google (<http://www.google.com/accounts/TOS>,
e.g., sections 2.1, 5.5, & 8.2 (& 8.1)) forbids "modify[ing]" Google's
content, and Apple (<http://www.apple.com/legal/terms/site.html>) says ". . .
make no modifications to any such information . . . .".

Your comment that "Site owners do not have control over their sites today. They
don't have to give any control up, because they don't have it in the first
place." is only true technologically and only in part. Owners are legally
responsible for content (as for libel) and rightly so since they have major
technological control with which to meet their legal obligations. I think you
overreached on a few points and this is one.

Beyond the fact that no user or creator can have ultimate or complete control
of any content, the issue here is third-party control. Firebug (running within
Firefox) and Opera are presumptively under the user's or site creator's
control. Users presumptively have control over user style sheets and their own
color settings, B&W, TTS, platform choice, and platform-specific fonts, and
these being in those hands is good. While many of these can be misused by
third parties, HTML 4.01 lends no legal support to such misuse, but v5 would.
Hence the relevance of law in combination with technology. Law can't be
escaped.

If anyone wants new legislation from a legislature, it would likely be
computer and browser firms and retailers seeking exemption from existing law,
and volume contracts make that unlikely. Given the laws already in place, the
proper venue for the specific issue at hand is the W3C.

On your technical points:

Yahoo can copy a doc from example.com into its own proxy or cache, store the
example.com URL with the doc, and then when the doc is copied or moved from the
cache or proxy Yahoo can report it as coming from the example.com URL. That's
what proxied networks and caching browsers do now. If you visit un.org and try
to visit it again an hour later, with normal settings your browser will
retrieve from your cache but show un.org in your address bar. Nothing from the
original URL gets into the cache or proxy without technical means to retrieve
from the original URL. Copying from an original URL is not made easier by
caching or proxying along the way. How would a cache or proxy, without more,
permit anyone to copy everyone's bank details?
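The caching behavior described above can be sketched in a few lines. This is a
minimal illustration with hypothetical names (CacheEntry, store, fetch — none
of these come from any real proxy or browser): the cache keeps the origin URL
alongside the stored document, so a later hit is served locally yet still
reported as coming from the original URL.

```python
# Minimal sketch (hypothetical names): a cache entry keeps the origin URL
# alongside the stored document, so later copies can still report the source.
from dataclasses import dataclass

@dataclass
class CacheEntry:
    origin_url: str   # where the document was originally fetched from
    body: bytes       # the cached document itself

cache = {}

def store(origin_url, body):
    # Keyed by the origin URL; the entry remembers where the body came from.
    cache[origin_url] = CacheEntry(origin_url, body)

def fetch(url):
    # A cache hit returns the stored body but still reports the origin URL,
    # much as a browser shows un.org in the address bar while serving from
    # its local cache.
    entry = cache.get(url)
    if entry is not None:
        return entry.origin_url, entry.body
    return None

store("http://example.com/doc", b"<html>...</html>")
print(fetch("http://example.com/doc"))
```

Note that nothing enters this cache except what was deliberately retrieved and
stored, which is the point: caching or proxying adds no new power to copy from
the original URL.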

Thanks for mentioning a rewriting proxy. Since it can edit a URL, that supports
my argument. Neither users nor site creators usually control proxies. A proxy
that can substitute a URL by one algorithm can probably do so by another,
becoming a third party's technical mechanism for retrieving from a URL other
than what the user thinks, such as from a third party's cache where markup is
applied.
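To illustrate the point about rewriting proxies: a sketch with hypothetical
rules and URLs (proxy.example and third-party.example are made up). The same
substitution machinery that applies a benign rewriting rule can apply a
hostile one just as easily, routing links through a third party's cache.

```python
# Minimal sketch (hypothetical names and URLs): a rewriting proxy that
# substitutes URLs by one rule can substitute them by another.
import re

def rewrite_links(html, rule):
    # Replace every href target in the page according to the supplied rule.
    return re.sub(r'href="([^"]+)"',
                  lambda m: 'href="%s"' % rule(m.group(1)), html)

# A benign rule: route requests through the proxy itself.
benign = lambda url: "http://proxy.example/fetch?u=" + url
# A less benign rule: route them through a third party's annotated cache.
hostile = lambda url: "http://third-party.example/cache?u=" + url

page = '<a href="http://example.com/doc">doc</a>'
print(rewrite_links(page, benign))
print(rewrite_links(page, hostile))
```

Either way the user's browser still displays the link they expected, while the
retrieval actually goes wherever the rule sends it.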

I withdraw the angle on scripts and forms. I think the effect on them is
essentially the same as on the rest and no more and no less an issue. I also
withdraw the $123-to-$12 type of tactic because it's most likely to succeed as
a DoS attack, itself probably illegal regardless of HTML.

Thanks.

-- 
Nick



Received on Tuesday, 30 June 2009 07:31:22 UTC