- From: Henry Story <henry.story@bblfish.net>
- Date: Mon, 10 Dec 2012 12:44:26 +0100
- To: j.jakobitsch@semantic-web.at
- Cc: public-webid Group <public-webid@w3.org>
- Message-Id: <44358B4C-BB1C-4D92-9E8A-C40423AED072@bblfish.net>
On 10 Dec 2012, at 11:44, Jürgen Jakobitsch <j.jakobitsch@semantic-web.at> wrote:

> hi,
>
> it just occurred to me that i could probably knock out every verifier by
> simply creating a huge webID-profile and logging in a couple of times.

Yes, and so a good WebID verifier needs to set up a number of protections. It should set limits on the sizes of the graphs it downloads, as well as on the time it allows for them to be downloaded. So in RWW-Play, when I fetch a graph I get a

    def get(uri: Rdf#URI): Future[LinkedDataResource[Rdf]]

https://github.com/read-write-web/rww-play/blob/bb4fbb70e6410a399489e8f0cd5f2a48911f20ab/app/org/www/readwriteweb/play/LinkedDataCache.scala#L40

The get method returns a Future immediately, and the result of the future can be waited for only a certain amount of time (500ms, for example).

> we talked about that a while ago, but i don't find the link right now.
> would it be helpful to have the public key available under its own uri
> and go for the key first? the public key would need to link back to the
> profile, and only in case the key is valid do a GET on the profile.

In RWW-Play I use the notion of a Principal as described in

http://www.w3.org/2005/Incubator/webid/wiki/Identity_Interoperability

I have a WebIDPrincipal and can also have a PublicKeyPrincipal or an OpenIDPrincipal. These can then all be added to the Subject as different identifiers when verified. So if the WebID verification is not ready in time, the PublicKey can already be used as an identifier.

> it's clear that one could also create a huge public key graph, but there
> could be size restrictions on that graph.

Yes, that is very much up to the implementation. Clearly, users that have huge profiles may end up having trouble logging into sites. But I think one can determine that pragmatically, and it will depend on the technologies used.
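A minimal sketch of such a bounded fetch, with a hypothetical `fetchGraph` standing in for the real `LinkedDataCache.get` and an invented `Graph` type (all names here are illustrative, not the RWW-Play API):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.Try

// Hypothetical stand-in for an RDF graph; size = number of triples.
final case class Graph(size: Int)

// Hypothetical fetch: like LinkedDataCache.get, it returns a Future immediately.
def fetchGraph(uri: String): Future[Graph] =
  Future { Graph(size = 100) } // pretend network fetch

// Wait at most `limit` for the graph, and reject graphs that are too big.
def boundedFetch(uri: String,
                 limit: FiniteDuration = 500.millis,
                 maxTriples: Int = 10000): Option[Graph] =
  Try(Await.result(fetchGraph(uri), limit)).toOption
    .filter(_.size <= maxTriples)
```

The point is that the verifier fails closed: a profile that is too slow or too large simply yields `None`, so a huge WebID profile degrades only that user's login rather than blocking the server.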
You could imagine, for example, a non-blocking asynchronous parser that does not even store the triples, but just queries every triple as it comes along for a pattern, so that the memory consumed could be very minimal.

> any thoughts?
>
> wkr turnguard
>
> --
> | Jürgen Jakobitsch,
> | Software Developer
> | Semantic Web Company GmbH
> | Mariahilfer Straße 70 / Neubaugasse 1, Top 8
> | A - 1070 Wien, Austria
> | Mob +43 676 62 12 710 | Fax +43.1.402 12 35 - 22
>
> COMPANY INFORMATION
> | web : http://www.semantic-web.at/
> | foaf : http://company.semantic-web.at/person/juergen_jakobitsch
> PERSONAL INFORMATION
> | web : http://www.turnguard.com
> | foaf : http://www.turnguard.com/turnguard
> | g+ : https://plus.google.com/111233759991616358206/posts
> | skype : jakobitsch-punkt
> | xmlns:tg = "http://www.turnguard.com/turnguard#"

Social Web Architect
http://bblfish.net/
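The streaming idea mentioned in the reply — query each triple for a pattern as it is parsed, without ever building the graph in memory — could be sketched like this (the `Triple` type and iterator-based parser output are invented for illustration; only the cert vocabulary URI is real):

```scala
// Hypothetical triple type; a streaming parser would emit these one at a time.
final case class Triple(subject: String, predicate: String, obj: String)

val cert = "http://www.w3.org/ns/auth/cert#"

// Scan a stream of triples for the key linked to a given WebID,
// holding at most one triple in memory at a time.
def findKey(triples: Iterator[Triple], webid: String): Option[String] =
  triples.collectFirst {
    case Triple(s, p, o) if s == webid && p == cert + "key" => o
  }

val stream = Iterator(
  Triple("https://bblfish.net/#hjs", cert + "key", "_:k1"),
  Triple("https://bblfish.net/#hjs", "http://xmlns.com/foaf/0.1/name", "Henry"))

val key = findKey(stream, "https://bblfish.net/#hjs")
```

Because `collectFirst` stops at the first match and the iterator is consumed lazily, an attacker's multi-gigabyte profile costs the verifier almost no memory.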
Attachments
- application/pkcs7-signature attachment: smime.p7s
Received on Monday, 10 December 2012 11:45:03 UTC