Allow specifying the subnet size for networks allocated from default pools #50114
akerouanton merged 1 commit into moby:master from
Conversation
|
One thing I haven't done in this PR is account for the case where only a subset of the available network pools may be able to fulfill the given preferred network size. For example, imagine pool A has a default size of /24 and pool B has a default size of /20, and the user requests a preferred size of /20. It likely makes sense to attempt to allocate an address from pool B first (even if pool A isn't fully allocated) before falling back to a /24 network from pool A if there's no available range in pool B. If a preferred network size is requested, it's likely worth ordering the available pools by default pool size before attempting to allocate a subnet, to ensure that pools that can fulfill the preferred size (or are closest to it) get precedence. |
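The pool-ordering idea above could be sketched roughly as follows. The `pool` type and `sortForPreferredSize` helper are hypothetical illustrations, not libnetwork's actual types; the sketch assumes a pool's default size is a hard limit, so a pool can only satisfy a preferred prefix that is at least as long as its default.

```go
package main

import (
	"fmt"
	"sort"
)

// pool is a hypothetical stand-in for an address pool with a configured
// default per-network prefix length.
type pool struct {
	base        string // base CIDR of the pool, e.g. "10.0.0.0/8"
	defaultSize int    // default prefix length for networks from this pool
}

// sortForPreferredSize orders pools so that those whose default size can
// satisfy the preferred prefix (defaultSize <= preferred, since a larger
// prefix number means a smaller network) come first, with the default size
// closest to the preferred size taking precedence.
func sortForPreferredSize(pools []pool, preferred int) {
	abs := func(n int) int {
		if n < 0 {
			return -n
		}
		return n
	}
	sort.SliceStable(pools, func(i, j int) bool {
		fi := pools[i].defaultSize <= preferred
		fj := pools[j].defaultSize <= preferred
		if fi != fj {
			return fi // satisfiable pools sort ahead of unsatisfiable ones
		}
		return abs(pools[i].defaultSize-preferred) < abs(pools[j].defaultSize-preferred)
	})
}

func main() {
	pools := []pool{
		{base: "10.0.0.0/8", defaultSize: 24},
		{base: "172.16.0.0/12", defaultSize: 20},
	}
	sortForPreferredSize(pools, 20)
	fmt.Println(pools[0].base) // prints "172.16.0.0/12": the /20 pool is tried first
}
```

With the pools from the example above, a preferred size of /20 moves the pool whose default is /20 ahead of the /24 pool, matching the fallback order described in the comment.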
|
Thank you! It's perfectly fine for things to be incomplete, or even "throw away code". Having something to look at, or to give it a spin, and to have the discussion never hurts! |
|
I ended up adding a step to sort the pools if a preferred size is specified (with the idea being to prefer pools that are able to accommodate the preferred size when allocating a network). My current design treats the default (or configured) pool prefix sizes as hard limits; the resulting network can have a smaller address range than the default, but never larger. For example, if a pool was configured with a default size of /24 and a request was made to create a network with a preferred prefix size of /16, any network from the pool would still be limited to a /24 prefix.

One other possible approach would be to honor the preferred prefix size as long as it was valid for the underlying base network for a given pool. For example, take a default pool configured with a base network of 172.17.0.0/12 with a default allocation size of /24. If a preferred size of /20 was given, that could be honored as it still fits within the overall /12 base subnet. In that case only the base prefix size would need to be considered when evaluating networks, not the default allocation size. |
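The hard-limit behavior described here can be illustrated with a small sketch (a hypothetical helper, not the actual implementation): the pool's default size acts as a floor on the prefix length, i.e. a cap on how large the resulting network's address range can be.

```go
package main

import "fmt"

// effectivePrefix returns the prefix length a network would actually get
// when a preferred size is requested from a pool whose default size is a
// hard limit: the result is never a larger range (smaller prefix number)
// than the pool default. Hypothetical sketch, not libnetwork code.
func effectivePrefix(preferred, poolDefault int) int {
	if preferred < poolDefault {
		return poolDefault // e.g. a /16 request against a /24 pool still yields a /24
	}
	return preferred
}

func main() {
	fmt.Println(effectivePrefix(16, 24)) // prints 24: capped at the pool default
	fmt.Println(effectivePrefix(26, 24)) // prints 26: smaller ranges are allowed
}
```

Under the alternative approach mentioned above, the comparison would instead be made against the pool's base prefix (/12 in the example), not its default allocation size.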
f2c7c9b to
87498f9
Compare
|
Just found a previous PR/discussion related to address pool sizes. So, linking it here, but it looks like that work stalled in April last year ... |
I need to do some work to get this ready to move out of draft; I've been mostly heads down on an upcoming release at work, but should have time again to devote to this proposal now. |
86005f7 to
08dcd59
Compare
|
I suspect there will still be a few changes warranted to this PR, but I think it's at a place where it's ready for actual review. |
Author of #47737 here, I was not given clear instructions or guidance on the next steps. There are unresolved arguments on whether "non network-savvy users can easily read and identify /22 subnets" and that's pretty much why it stalled. On my side all required code changes have been done and all I was doing was awaiting input. |
I'd argue the two changes are complementary; I don't think anything in this PR invalidates the benefit of more reasonable defaults that increase the number of simultaneous networks available to the average Docker user. I see this PR as being more beneficial for developer tooling that interacts with Docker networking (coincidentally exactly what I work on, but I also recognize it's a more niche scenario). My main motivation with this PR is that, if I know that even 1024 unique addresses is more than a given network will require, I'd rather be a good steward of a developer's system resources and request a smaller range such as /24 (or even /25). |
|
We discussed this PR during today's libnet maintainers call. We all agreed that the

Regarding API representation, we think that it's fine to put unspecified prefixes in

We considered putting the subnet size in an IPAM option at the API level — i.e. while still allowing users to specify unspecified prefixes through

Regarding this comment:
We're rather erring on the side of making the subnet size a mandatory requirement the Engine has to honor, instead of an hint that the Engine may decide to ignore. The reason is that we're considering #47737 as a next step, and we'd like to take that PR even further by making the default subnet size a global config parameter instead of a per address pool parameter as is currently the case. |
Sounds good; the current implementation is fairly close to this already. I'm treating the subnet size as mandatory (when specified) and honoring it via
|
@akerouanton with the latest commits, I believe the PR is at a place where it matches the desired behavior. |
|
I missed a unit test that was validating RequestPool with "all address" subnet requests ( |
|
Could you squash your commits please? It'd make it easier to review from an IDE. At a quick glance, I think there's no upfront refactoring, so it should probably go into a single commit. |
87a30a0 to
e2e763e
Compare
|
cc @dperny - want to take a look as well? |
d441a9b to
cc7d6ef
Compare
|
I'm assuming the remaining failing tests are likely noise or an issue with the GitHub agent, as they seem to be due to disk capacity issues on an unrelated integration test. |
|
Yes, test failures look unrelated to this change. As you say, it seems a test runner is low on space. Strange to have hit the same thing on the first two runs, but I've given it another kick. |
cc7d6ef to
57f8b6c
Compare
|
Can I trouble someone to re-run the one failing unit test? |
Looks like it's going to keep failing ... lots of stuff is broken today, but definitely unrelated to this PR. |
Yeah, us-east-1 going down does tend to have some severe knock-on effects |
|
@danegsta You have one conflict to resolve, and we should be able to merge your PR afterward. |
57f8b6c to
7c41d32
Compare
Merge conflicts are resolved (there was only a single conflicting line, so a very quick update). |
|
Spoke too soon; thought I'd run the integration tests before pushing, but looks like I only ran the unit tests. There's a few additional tweaks that'll have to happen due to other changes that didn't cause conflicts. |
…pools
Signed-off-by: David Negstad <David.Negstad@microsoft.com>
7c41d32 to
ea0d934
Compare
|
Okay, fixed the issue with the return type from the new integration network inspect helpers and both unit and integration tests are passing locally for me. |
|
Thanks @danegsta! Let's get it in 🎉 |
Thanks everyone for the reviews and feedback to help get this ready to merge! |
Thank you for all the work! |
|
Docs update (reviews welcome!) ... |
- What I did

Updated how subnet arguments are processed to treat `0.0.0.0/<size>`, `::/<size>`, or `/<size>` as requests for a subnet from the default pools with the given prefix size (rather than the default prefix size for the pool).

Networks with both IPv4 and IPv6 subnets are supported: `docker network create --ipv6 --subnet ::/65 --subnet 0.0.0.0/24 test` results in an IPv6 subnet of size /65 and an IPv4 subnet of size /24, both from the default pools. Fixed address subnets can be combined with default pool subnets as well.

Implements #50107

- How I did it

Added support for treating `0.0.0.0` and `::` subnet addresses as requests for networks from the default pools. If a prefix length is provided as well (i.e. `0.0.0.0/24` or `::/64`), it will be used when allocating the network instead of the pool default prefix length. If no pool

- How to verify it

I've added unit tests for the new behavior, but the logic can be manually verified by specifying a global subnet to `docker network create` (i.e. `docker network create --subnet 0.0.0.0/24 test`, or `docker network create --subnet /24 test`).

- Human readable description for the release notes

- A picture of a cute animal (not mandatory but encouraged)
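The unspecified-address convention described in the PR can be illustrated with Go's `net/netip` package. This is a sketch of the idea, not the actual daemon code, and the `defaultPoolSize` helper is hypothetical; bare `/<size>` arguments are omitted here since they aren't valid CIDR strings on their own.

```go
package main

import (
	"fmt"
	"net/netip"
)

// defaultPoolSize reports whether a --subnet value such as "0.0.0.0/24" or
// "::/65" should be treated as a request for a network of that size from
// the default pools, and if so, returns the requested prefix length.
func defaultPoolSize(subnet string) (size int, fromDefaultPool bool) {
	p, err := netip.ParsePrefix(subnet)
	if err != nil {
		return 0, false
	}
	if p.Addr().IsUnspecified() {
		// 0.0.0.0 or :: — the caller wants a subnet of p.Bits() size
		// carved from the default pools rather than this literal range.
		return p.Bits(), true
	}
	return 0, false // a concrete address means a fixed subnet was requested
}

func main() {
	fmt.Println(defaultPoolSize("0.0.0.0/24"))    // prints: 24 true
	fmt.Println(defaultPoolSize("::/65"))         // prints: 65 true
	fmt.Println(defaultPoolSize("172.20.0.0/16")) // prints: 0 false
}
```

This mirrors the behavior described above: an unspecified address carries only a size request, while a concrete address pins the network to a fixed subnet.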