神明達哉
2017-07-17 16:42:31 UTC
At Mon, 17 Jul 2017 08:58:18 +1200,
Although it's probably true that at the time of RFC4862 we didn't
expect networks with much higher packet loss to be so dominantly
deployed, I don't necessarily think the current parameter of RFC4862
is a "bug". First off, "1" is just the default of
DupAddrDetectTransmits, not a fixed constant. Secondly, my
understanding is that DAD as defined in RFC4862 (or its predecessors)
has never been considered a very reliable duplicate detection
mechanism, so it didn't bother to make it arbitrarily less unreliable
by default (when 1 may not be enough, there's no guarantee that 3
or 4 is enough anyway). I also thought the "official answer" for a
high packet-loss environment is to use a more sophisticated mechanism
such as RFC4429.
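[For concreteness, since DupAddrDetectTransmits is a configurable parameter rather than a fixed constant: on Linux it surfaces as the per-interface dad_transmits sysctl. The interface name "eth0" below is only an example; this is an operator-side sketch, not something the RFC itself mandates.]

```shell
# Read the current DAD transmit count for an interface
# (stock kernels ship with the RFC4862 default of 1):
sysctl net.ipv6.conf.eth0.dad_transmits

# An operator on a lossy link could raise it, e.g. to 3
# (requires root; takes effect for subsequent address configuration):
sysctl -w net.ipv6.conf.eth0.dad_transmits=3
```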
(That said, I wouldn't necessarily be opposed to updating the default
value if and when we update RFC4862 so it will match the latest
deployments for leaf networks better).
--
JINMEI, Tatuya
I looked up RFC4862 to find out how many DAD attempts there were,
because I'd assumed 3 to 4, and if so, then if that many attempts
failed, I'd say the link is well beyond its capacity. Something would
need to be done to remedy a link capacity problem in that case.
I was surprised to find, as Ole mentioned, the number of attempts was
only 1 rather than 3 or 4.
I'm more than surprised. I think that's a bug and IMHO it should be fixed.