Swarm Update Config


#1

Hi, I’m trying to build a small Swarmkit feature that needs some config options updated from the Docker CLI via “docker swarm update [OPTIONS]”, and I’m confused about which structures are present or available to modify on either side of the API.

From “docker-ce/components/cli/cli/command/swarm/update.go” I have: “opts := swarmOptions{}”

which sends a “swarm.Spec” to “moby/client/swarm_update.go”

but nothing in the type description or any kind of breadcrumbs tells me how this gets received by the Swarmkit API. What I’d like is to pass a struct with some service description configuration that is then accessible to “swarmkit/manager/role_manager.go”

How does swarm.Spec get passed to and then read by Swarmkit? Should I add my own swarm.Spec.MySpec to “moby/api/types/swarm/swarm.go”? Where is it then accessed in “swarmkit/manager” package? Am I even in the right spot?


#2

Hey, Ben!

The service returned by the API is a combination of desired state and actual state.
The items that are available for update come from the ServiceSpec, which is the desired state.
Here’s the API spec: https://docs.docker.com/engine/api/v1.35/#operation/ServiceUpdate

Note that not all fields are necessarily updatable; for instance, the service mode can’t be updated.
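To make the desired-vs-actual split concrete, here’s a toy sketch of the version-gated update that ServiceUpdate performs. These are stand-in types, not the real API client: the client reads a service, edits only its Spec (the desired state), and submits it back together with the version it read, so concurrent writers can’t clobber each other:

```go
package main

import (
	"errors"
	"fmt"
)

// Toy stand-ins: a Service couples the user-supplied desired state (Spec)
// with a version counter used for optimistic concurrency.
type ServiceSpec struct {
	Name     string
	Replicas uint64
}

type Service struct {
	Version uint64      // bumped on every accepted update
	Spec    ServiceSpec // desired state: the only part a client may update
}

var errStaleVersion = errors.New("update out of sequence")

// update mirrors ServiceUpdate's contract: the caller sends back the version
// it read; if the service changed in the meantime, the update is rejected.
func (s *Service) update(version uint64, spec ServiceSpec) error {
	if version != s.Version {
		return errStaleVersion
	}
	s.Spec = spec
	s.Version++
	return nil
}

func main() {
	svc := &Service{Version: 4, Spec: ServiceSpec{Name: "web", Replicas: 3}}
	// A client that read version 4 may scale the service...
	err := svc.update(4, ServiceSpec{Name: "web", Replicas: 5})
	fmt.Println(err, svc.Spec.Replicas) // <nil> 5
	// ...but a second writer still holding version 4 is rejected.
	err = svc.update(4, ServiceSpec{Name: "web", Replicas: 1})
	fmt.Println(err) // update out of sequence
}
```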

Hope this helps. If you can link some code it may be easier to assist.
Thanks!


#3

Thanks for the quick response! I’ll link some code when I write it :wink: I wanted to make sure I wasn’t going to write something unusable, first, though.


#4

OK, so I created a custom ServiceMode in specs.proto, and left everything else stock so I could use existing TaskTemplates to define my service. I created a new orchestrator to automatically promote/demote Workers/Managers. reconcile() bypasses “slots” and reads raft state directly, but it should allow for Constraints and Preferences to determine Manager eligibility and promotion.
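In very rough terms, the promotion half of my reconcile() pass works like the sketch below. All the names here are hypothetical stand-ins, not actual swarmkit types, and Eligible stands in for the constraint/preference checks I described:

```go
package main

import "fmt"

// Hypothetical stand-ins for the reconcile() sketch; none of these names
// come from swarmkit itself.
type Role int

const (
	Worker Role = iota
	Manager
)

type Node struct {
	ID       string
	Role     Role
	Eligible bool // would be computed from Constraints/Preferences
}

// reconcile counts current managers and, if below the desired total,
// promotes eligible workers until the target is met.
func reconcile(nodes []*Node, desiredManagers int) (promoted []string) {
	managers := 0
	for _, n := range nodes {
		if n.Role == Manager {
			managers++
		}
	}
	for _, n := range nodes {
		if managers >= desiredManagers {
			break
		}
		if n.Role == Worker && n.Eligible {
			n.Role = Manager // in swarmkit this would be a role change through the store
			promoted = append(promoted, n.ID)
			managers++
		}
	}
	return promoted
}

func main() {
	nodes := []*Node{
		{ID: "m1", Role: Manager},
		{ID: "w1", Role: Worker, Eligible: true},
		{ID: "w2", Role: Worker, Eligible: false},
		{ID: "w3", Role: Worker, Eligible: true},
	}
	fmt.Println(reconcile(nodes, 3)) // [w1 w3]
}
```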

The bulk of the changes are to reconcile() in:

I want to make sure I’m not barking up the wrong tree before I work on the sorting logic, and also I’m hungry, so this is it so far.

Thanks for taking a look!
-Ben


#5

OK, I completely reworked this to integrate RoleServices into role_manager.go, and added a scheduler/manager.go that uses the existing constraint/preference logic to automatically promote Worker nodes to Manager in order to maintain a user-defined set of Managers:

swarmkit/manager/role_manager.go
swarmkit/manager/scheduler/manager.go

I’d like to make a pull request; what’s the best way to go about that?

Thanks,
-Ben


#6

I’m not sure I understand what you are attempting to accomplish.

automatically promote Worker nodes to Manager in order to maintain a user-defined set of Managers

This seems like something that violates swarmkit’s security model.

Can you explain what you are wanting to accomplish?


#7

So right now, if I start up a cluster with 3 Manager nodes and however many Worker nodes, and a Manager goes down, I have to manually decide which machine to promote to Manager and then, well, promote it. I can use Stacks and Services to keep, say, 3 copies of Nginx or whatever alive and spread across the cluster, no matter what happens, as long as I manually maintain the Manager pool. I’m trying to use the same scheduling logic of Placement Constraints and Preferences to only change the roles of nodes that are already securely joined to the cluster.

None of the service orchestrators out there can orchestrate themselves. This is only a scheduler-managed version of the CLI “docker node promote NODE”, but it makes Docker/Swarmkit fully self-healing. If the underlying function is available from the command line, it shouldn’t violate security.
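To be clear about how small the primitive is: as far as I understand, “docker node promote NODE” just re-submits the node’s spec with the role flipped to manager. The sketch below uses toy stand-in types, not real moby/swarmkit code; my change only has the orchestrator make that same role change automatically:

```go
package main

import "fmt"

// Toy stand-ins for the node-update flow; not moby/swarmkit types.
type NodeSpec struct {
	Role string // "worker" or "manager"
}

type Node struct {
	ID      string
	Version uint64
	Spec    NodeSpec
}

// promote mirrors the CLI flow: read the node, flip the desired role in its
// spec, and write the spec back, which bumps the node's version.
func promote(n *Node) {
	spec := n.Spec
	spec.Role = "manager"
	n.Spec = spec
	n.Version++
}

func main() {
	n := &Node{ID: "w1", Version: 7, Spec: NodeSpec{Role: "worker"}}
	promote(n)
	fmt.Println(n.Spec.Role, n.Version) // manager 8
}
```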