Re: Striving for Compromise (Consensus?)

In message <CABkgnnVL+jvxLTJE9VQKK7qEFVvcKYyEFw5NSeRVvF9sTCnCZQ@mail.gmail.com>, Martin Thomson writes:
>On 11 July 2014 16:06,  <K.Morgan@iaea.org> wrote:
>> BTW, we still work with micro-controllers with ~100K, where 16K is a significant resource  commitment.
>
>In those environments, do you have to deal with arbitrary peers?
>Peers that might not be aware that you are so constrained?

Do they risk arbitrary peers that send requests filled with
either junk headers or cookies that don't belong?

Not unless somebody is deliberately trying to sink them with a DoS.

Spiders are careful to send very small requests initially,
because they get blocked if they cause trouble.

A browser pointed at your new $device for the first time knows
nothing about it and simply does a "GET /", and 256 bytes is
enough for that.  It may get a SETTINGS and a page back, or it
may get a redirect to the vendor's web-farm, with the serial
number in the :path or :query fields.
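
For concreteness, here is roughly what that opening SETTINGS
frame could look like on the wire.  This is a minimal sketch
against the HTTP/2 framing layer (SETTINGS, section 6.5 of the
HTTP/2 draft, later RFC 7540); the 256-byte table size and the
single-stream limit are assumptions for a constrained device,
and the socket handling is left out:

#include <stdint.h>
#include <string.h>

#define H2_FRAME_SETTINGS                   0x4
#define H2_SETTINGS_HEADER_TABLE_SIZE       0x1
#define H2_SETTINGS_MAX_CONCURRENT_STREAMS  0x3

/* Append one 6-byte setting: 16-bit identifier, 32-bit value. */
static size_t
put_setting(uint8_t *p, uint16_t id, uint32_t val)
{

	p[0] = (uint8_t)(id >> 8);
	p[1] = (uint8_t)id;
	p[2] = (uint8_t)(val >> 24);
	p[3] = (uint8_t)(val >> 16);
	p[4] = (uint8_t)(val >> 8);
	p[5] = (uint8_t)val;
	return (6);
}

/* Build the device's opening SETTINGS frame; returns bytes written. */
size_t
build_settings(uint8_t *buf)
{
	size_t off;

	buf[0] = 0; buf[1] = 0; buf[2] = 12;	/* 24-bit payload length */
	buf[3] = H2_FRAME_SETTINGS;		/* frame type */
	buf[4] = 0;				/* flags: not an ACK */
	memset(buf + 5, 0, 4);			/* stream id 0 */
	off = 9;
	off += put_setting(buf + off, H2_SETTINGS_HEADER_TABLE_SIZE, 256);
	off += put_setting(buf + off, H2_SETTINGS_MAX_CONCURRENT_STREAMS, 1);
	return (off);
}

Twenty-one bytes on the wire, which even a ~100K micro-controller
can afford.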

>Are those micro-controllers going to be using HTTP/2 with default
>configuration such that they might get 100 requests immediately after
>accepting the connection?

I don't see that happening, and if it does, it is a DoS.

A single browser would have to encounter 100 (different!) links
to the $device on a single webpage for that to happen.  Somebody
put those links there; it's reasonable to expect them to have
tested the page, and they should have noticed that it didn't
work too well.
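
And even when a burst does arrive before the client has acted on
the device's SETTINGS, the protocol lets the device shed the
excess cheaply: refuse each surplus stream with RST_STREAM
carrying REFUSED_STREAM (error code 0x7), which tells the client
the request was not processed and may safely be retried.  A
sketch in the same hand-rolled style as above; the helper name
is mine:

#include <stddef.h>
#include <stdint.h>

#define H2_FRAME_RST_STREAM	0x3
#define H2_REFUSED_STREAM	0x7

/* Refuse one stream: 9-byte frame header plus 32-bit error code. */
size_t
refuse_stream(uint8_t *buf, uint32_t stream_id)
{

	buf[0] = 0; buf[1] = 0; buf[2] = 4;	/* 24-bit payload length */
	buf[3] = H2_FRAME_RST_STREAM;		/* frame type */
	buf[4] = 0;				/* no flags defined */
	buf[5] = (uint8_t)((stream_id >> 24) & 0x7f); /* reserved bit clear */
	buf[6] = (uint8_t)(stream_id >> 16);
	buf[7] = (uint8_t)(stream_id >> 8);
	buf[8] = (uint8_t)stream_id;
	buf[9] = buf[10] = buf[11] = 0;
	buf[12] = H2_REFUSED_STREAM;		/* error code: REFUSED_STREAM */
	return (13);
}

Thirteen bytes per refused stream, and nothing to buffer.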

And if your thermostat gets to the front page of reddit, you
don't seriously expect it to handle the traffic, do you?

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.

Received on Saturday, 12 July 2014 06:57:49 UTC