Hi, I am trying to build a small Swarmkit feature that needs some config options updated from the Docker CLI via “docker swarm update [OPTIONS]”, and I’m confused about which structures exist, or are available to modify, on either side of the API.
From “docker-ce/components/cli/cli/command/swarm/update.go” I have: “opts := swarmOptions{}”
which sends a “swarm.Spec” to “moby/client/swarm_update.go”
but nothing in the type definition, or any other breadcrumbs, tells me how this gets received on the Swarmkit side. What I’d like is to pass a struct with some service-description configuration that ends up accessible to “swarmkit/manager/role_manager.go”.
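To make that first hop concrete, here’s roughly what I think the CLI-to-client path boils down to (a simplified sketch, not verbatim docker-ce code; updateSnapshotInterval is just a stand-in for one of the existing flags):

```go
// Sketch of the "docker swarm update" flow as I understand it: inspect the
// current swarm, fold the flag values into the returned swarm.Spec, then hand
// the whole Spec back via SwarmUpdate, which POSTs it to /swarm/update.
package main

import (
	"context"

	"github.com/docker/docker/api/types/swarm"
	"github.com/docker/docker/client"
)

// updateSnapshotInterval stands in for the flag-merging the real CLI does.
func updateSnapshotInterval(ctx context.Context, cli client.APIClient, interval uint64) error {
	current, err := cli.SwarmInspect(ctx)
	if err != nil {
		return err
	}

	// The CLI mutates the inspected Spec in place, then sends the whole
	// thing back together with the version it read.
	current.Spec.Raft.SnapshotInterval = interval

	return cli.SwarmUpdate(ctx, current.Version, current.Spec, swarm.UpdateFlags{})
}
```

As far as I can tell the whole swarm.Spec travels as the JSON body of that POST; the part I can’t trace is what happens once the daemon decodes it.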
How does swarm.Spec get passed to, and then read by, Swarmkit? Should I add my own swarm.Spec.MySpec to “moby/api/types/swarm/swarm.go”? Where would it then be accessed in the “swarmkit/manager” package? Am I even in the right spot?
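To make the second question concrete, this is the kind of change I mean (MySpec/MySpecConfig are made-up names, and the existing fields are elided):

```go
// In moby/api/types/swarm/swarm.go -- hypothetical addition for illustration.
type Spec struct {
	Annotations

	Orchestration OrchestrationConfig `json:",omitempty"`
	// ... existing fields (Raft, Dispatcher, CAConfig, TaskDefaults, EncryptionConfig) ...

	// MySpec would carry the options my role service needs.
	MySpec MySpecConfig `json:",omitempty"`
}

// MySpecConfig is a placeholder for whatever options the feature ends up needing.
type MySpecConfig struct {
	Enabled bool `json:",omitempty"`
}
```

My guess is that a matching field would also need to exist in Swarmkit’s ClusterSpec protobuf, plus a conversion somewhere in the daemon between the two, but I haven’t found where that conversion lives.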
OK, so I created a custom ServiceMode in specs.proto and left everything else stock so I could use the existing TaskTemplates to define my service. I created a new orchestrator to automatically promote/demote Workers/Managers. reconcile() bypasses “slots” and reads raft state directly, but it should still allow Constraints and Preferences to determine Manager eligibility and promotion.
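For the eligibility part, the candidate filtering I have in mind is roughly the following (pickPromotionCandidates and the idea of attaching a Placement to the role service are my own; constraint.Parse and constraint.NodeMatches are the existing helpers from manager/constraint):

```go
package orchestrator

import (
	"github.com/docker/swarmkit/api"
	"github.com/docker/swarmkit/manager/constraint"
)

// pickPromotionCandidates filters the node list down to ready Workers that
// satisfy the Placement constraints; these are the nodes reconcile() would
// consider promoting to Manager.
func pickPromotionCandidates(nodes []*api.Node, placement *api.Placement) ([]*api.Node, error) {
	var constraints []constraint.Constraint
	if placement != nil && len(placement.Constraints) > 0 {
		parsed, err := constraint.Parse(placement.Constraints)
		if err != nil {
			return nil, err
		}
		constraints = parsed
	}

	var candidates []*api.Node
	for _, n := range nodes {
		if n.Status.State != api.NodeStatus_READY {
			continue // skip nodes that aren't ready
		}
		if n.Spec.DesiredRole != api.NodeRoleWorker {
			continue // only Workers are promotion candidates
		}
		if len(constraints) > 0 && !constraint.NodeMatches(constraints, n) {
			continue // respect Placement constraints for Manager eligibility
		}
		candidates = append(candidates, n)
	}
	return candidates, nil
}
```

Sorting the surviving candidates by Preferences is the part I haven’t written yet.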
The bulk of the changes are to reconcile() in:
I want to make sure I’m not barking up the wrong tree before I work on the sorting logic, and also I am hungry, so this is it so far.
OK, I completely reworked this to integrate RoleServices into role_manager.go, and added a scheduler/manager.go that uses the existing constraint/preference logic to automatically promote Worker nodes to Manager in order to maintain a user-defined set of Managers:
So right now, if I start up a cluster with 3 Manager nodes and however many Worker nodes, and a Manager goes down, I have to manually decide which machine to promote to Manager and then, well, promote it. I can use Stacks and Services to keep, say, 3 copies of Nginx alive and spread across the cluster no matter what happens, as long as I manually maintain the Manager pool. I’m trying to use the same scheduling logic of Placement Constraints and Preferences to change the roles of only those nodes that are already securely joined to the cluster.
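The role change itself is tiny; it’s essentially this store write (promoteNode is my own sketch, not an existing Swarmkit function), which as far as I can tell is the same thing “docker node promote NODE” ends up doing through the control API:

```go
package orchestrator

import (
	"errors"

	"github.com/docker/swarmkit/api"
	"github.com/docker/swarmkit/manager/state/store"
)

// promoteNode flips the desired role of an already-joined node to Manager in
// the raft-backed store; the existing reconciliation paths take it from there.
func promoteNode(s *store.MemoryStore, nodeID string) error {
	return s.Update(func(tx store.Tx) error {
		node := store.GetNode(tx, nodeID)
		if node == nil {
			return errors.New("node not found: " + nodeID)
		}
		node.Spec.DesiredRole = api.NodeRoleManager
		return store.UpdateNode(tx, node)
	})
}
```

Everything else (certificates, raft membership) stays with the existing reconciliation, which is why I’m only touching DesiredRole on nodes that are already securely joined.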
None of the service orchestrators out there can orchestrate themselves. This is only a scheduler-managed version of the CLI “docker node promote NODE”, but it makes Docker/Swarmkit fully self-healing. Since the underlying operation is already available from the command line, automating it shouldn’t violate the security model.