- From: Mike Schinkel <mikeschinkel@gmail.com>
- Date: Thu, 13 Dec 2007 05:42:10 -0500
- To: "'Erik Wilde'" <dret@berkeley.edu>, <uri@w3.org>
Erik Wilde wrote:

> interesting, i did not know that one. but isn't that more or less
> exactly like pointing to google maps and their convention for embedding
> gps into query string data?

Similar, but not exactly the same. Google doesn't own the physical world, but Linden Labs owns the virtual world in Second Life. So similar, but different.

> we could just say that wgs84 coordinates should be represented like this:
>
> http://maps.google.com/maps?ll=<lat>,<long>

Something like that works very well. One approach is to set a standard for the format of "<lat>,<long>" and then get Google, Yahoo, MSN, et al. to support the standard in their URLs.

> for most people, that would probably be good, right? just don't ask
> yahoo or microsoft or anybody else.

Why not ask them? Ask them all to specify their URL structure using URI Templates (assuming the final standard is something mortals can understand) and publish a list of providers and their URI Templates at http://location.org (a rough sketch of what such a published list might look like follows further below). Or for that matter, publish them at http://www.wikipedia.org/wiki/LocWeb_Service_Providers

> what i want to say by that: slurl.com conveniently works because
> secondlife belongs to a company and they can do whatever they like. i
> hope google have not yet reached that much influence to be able to say
> they see the world with the same eyes as linden labs sees second
> life... ;-) i see the "foundation argument" coming, so what you then
> suggest is

Very Pavlovian of you... :-)

> > http://location.org/<lat>,<long>
>
> and location.org is owned by the good guys. i still don't fully get the
> model.

I've done a bit of research, and probably one of the best analogs I can find is PURLs at http://purl.org

I know you know about PURLs because of:
http://dret.net/glossary/purl

PURLs have not seen broad use, but I'd say that is because of decisions they made, not anything that would discredit the concept. Other somewhat similar examples are:

wikipedia.org
tinyurl.com
iana.org
w3.org
archive.org

None of those is 100% analogous, but you can take the positive aspects of each and combine them to make a viable location.org concept.

> - in the beginning, most browsers will actually go to location.org and
> will get whatever location.org gives them (maybe a nice page such as
> this http://explorer.altopix.com/map/popvvo/283/216/Mount_Everest.htm),

Why the silly example? Are we being serious in our discussions or not?

> that allows you to visit various mapping services. is location.org
> being paid to include providers on that page?

You seem to feel that everything that is a domain must be commercial. It can be commercial where that makes sense for the good of all, but it doesn't have to be where it doesn't. Yes, location.org would likely serve a list of URLs for actual providers, much like DNS bootstraps from the root servers. Those providers could be volunteers like the root server operators, a list of commercial providers, or a mix of both.

Imagine a group of people from major universities, say from UC Berkeley, University of Oldenburg, Cardiff University, University of Wisconsin-Madison, University of Zurich, and MODUL University Vienna, getting together to form the "Geo Location Foundation" or some such. They create a charter and define a mission: the foundation will manage the location registry for the good of all and guard against special interests. To fund the foundation, it could request an endowment in exchange for seats on the board. It could offer memberships to interested parties who want to be involved in ongoing enhancement, and it could charge placement fees to providers that follow a set of well-defined rules. Ideally the barrier to participation would be very low, to foster innovation, but the benefits for companies like Google, Yahoo, and MSN would be so great that they and many others would fund the foundation in exchange for a seat on the board and consumer opt-in commercial opportunities. Location.org could become a division of the W3C or of some other body; there are pros and cons to being independent versus associated, and that discussion is out of scope as far as I'm concerned right now.
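Here is a rough sketch, purely illustrative, of what such a published provider list might look like and how one of its URI Templates would be expanded. The provider entries and template strings are made up for the sake of example (the only piece borrowed from above is the ll=<lat>,<long> form Erik mentioned), and I'm using Python just to keep it concrete:

    # A hypothetical provider registry of the sort location.org might
    # publish. Each entry maps a provider name to a URI Template; the
    # {lat} and {lon} placeholders carry WGS84 coordinates in the
    # "<lat>,<long>" format discussed above.
    PROVIDERS = {
        "google":         "http://maps.google.com/maps?ll={lat},{lon}",
        "example-mapper": "http://maps.example.org/view/{lat},{lon}",
    }

    def expand(template, lat, lon):
        """Fill a provider's URI Template with a pair of WGS84 coordinates."""
        return template.format(lat=lat, lon=lon)

    # Mount Everest, roughly 27.9881 N, 86.9250 E:
    for name, template in PROVIDERS.items():
        print(name, "->", expand(template, "27.9881", "86.9250"))

The registry itself could live at a well-known URI on location.org so that clients and providers alike can fetch it out of band.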
> if not, how do you prevent spammers from invading that space?

This is a theme you keep coming back to. You seem to feel that by virtue of being a domain it will attract spam, but spam can only end up on a domain when functionality is put in place that makes spam possible. I really don't see where that is an issue for this use case. Or am I missing something?

> - over time, more and more browsers will understand location.org as a
> magic domain name and do some internal mapping of that uri to whatever
> they think is appropriate for locations.

Yup. That's part of the idea (a rough sketch of what that client-side mapping might look like follows below).

> - in the end, location.org can almost shut down its server, because all
> web clients know the magic prefix.

Yup, although shutting it down would eliminate the benefits of serving out-of-band information.
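Here is an equally rough sketch, with the same made-up provider templates as above, of how a client might treat location.org as a magic domain: rewrite http://location.org/<lat>,<long> for the user's preferred provider, and pass anything it does not recognize straight through to location.org itself (which is the fallback that keeps the out-of-band server useful):

    from urllib.parse import urlsplit

    # Hypothetical URI Templates, as in the earlier sketch; a real client
    # would refresh these from location.org rather than hard-code them.
    PROVIDERS = {
        "google":         "http://maps.google.com/maps?ll={lat},{lon}",
        "example-mapper": "http://maps.example.org/view/{lat},{lon}",
    }

    def resolve(uri, preferred="google"):
        """Map a location.org URI to the preferred provider's URL.

        Unrecognized URIs are returned unchanged, so a client with no
        special knowledge of the magic domain still works: it simply
        fetches whatever location.org itself serves.
        """
        parts = urlsplit(uri)
        if parts.netloc != "location.org":
            return uri
        try:
            lat, lon = parts.path.lstrip("/").split(",", 1)
            float(lat), float(lon)  # validate that both parts are numeric
        except ValueError:
            return uri  # not a <lat>,<long> path; let the server handle it
        template = PROVIDERS.get(preferred)
        if template is None:
            return uri
        return template.format(lat=lat, lon=lon)

    print(resolve("http://location.org/27.9881,86.9250"))
    # -> http://maps.google.com/maps?ll=27.9881,86.9250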
> that feels just wrong to me, even though i do understand the reasoning
> behind it.

I can definitely appreciate how something may *feel* wrong; I've learned it often has to do with what we are used to. For example, I'm currently programming in PHP and find it nasty because of its C-language heritage (semicolons and braces, et al.). But I'm learning to ignore my distaste for PHP because I'm realizing it is just my past experience that has given me a preference for line-oriented languages, and because I'm realizing there isn't anything fundamentally wrong with PHP's syntax (there is something fundamentally wrong with the architecture of the PHP libraries, but I digress... :)

So my 40+ years of life experience has been teaching me to hold my preferences and search for the best solution based on objective criteria, even if that solution doesn't "feel" right to me. And that's what I'm trying to do here, hence why I explored a variety of approaches and followed what others have said about schemes. Frankly, a new scheme actually feels better to me too, but I'm arguing against it in deference to those who have written TAG findings against adding schemes. HTH.

> i would still argue that in that case it just works conveniently
> because that is a single company.

It can still be a single foundation. There is a single W3C, isn't there?

> i would still be interested in any example where that happened for
> something that is not owned by one company.

Again: purl.org, w3.org, iana.org. And from foundations: wikipedia.org, archive.org.

> > I think slurls show how this approach can be practical for spaces
> > with very fine grain. Of course, the administrative domain slurl.com
> > is a relatively heavyweight construct, but it's being applied to
> > identification of positions in Second Life space with extremely fine
> > resolution.
>
> i am not so much worried about the dns registration, but more about the
> fact that something like a slurl probably should not be a uri scheme
> mostly because it is a pointer into one company's commercial dataset.

It seems to me that that is an arbitrary distinction. As long as Second Life is valuable enough to keep a company in business, the SLURLs will live on slurl.com. In the case of location.org, the foundation founded by government and industry runs it as long as there is an Internet and as long as the concept of location is valuable, which I assume, for practical purposes, will be forever. What's the difference where the dataset originates, as long as there is a standardized interface and a commitment to keep it online?

> what i don't quite get: if that is the way advocated by the w3c, why
> not simply disallow new uri schemes, and mandate that new "schemes"
> always have to be deployed in that way?

You can't really stop people from doing it. Companies do it all the time out of hubris, indifference, or pure ignorance. Besides, some things are better as guidelines than as rules. After all, there may be use cases that do make sense. GeoLoc might even be one of those, if enough people can be convinced of it.

> personally, i would find such a "magic domain" concept becoming an
> essential part of the web architecture a bit weird, but i seem to be
> the minority.

Would you be willing to explore why you find it weird? I find the concept beautifully elegant, much like a programming language with a few core concepts where everything is built out of those core concepts and almost nothing is "special" (maybe that is why I like Python and dislike PHP...). In reading your emails I've been trying to figure out what fundamentally makes you so averse to building this concept on top of HTTP. I really would appreciate understanding your point of view on this.

Thanks in advance.

-- 
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org
Received on Thursday, 13 December 2007 10:42:25 UTC