
On the vagaries of init systems

When I started working on Dinit I had only a fairly vague idea of the particulars of various other init systems, being familiar mainly with Sys V init and, to a lesser extent, Systemd and Upstart (the latter of which has more-or-less vanished off the face of the earth). At that stage it was a purely personal project and I didn’t necessarily plan on making it public. As time went on I heard lots of complaints about Systemd, which has become the init system of choice for many distributions; I did a little research on some other systems – enough to satisfy myself that Dinit filled a worthwhile niche – and then announced that I was planning to develop it into a(nother) complete init/service manager that could potentially compete with Systemd.

Around that time, I also wrote a short document trying to summarise the differences between a number of extant systems, or at least between them and Dinit, and included this in Dinit’s documentation (as part of the source tree). However, the time has perhaps come for a more comprehensive treatment examining the differing design choices of various systems; hence, this post. Hopefully I can give an interesting overview of some design decisions that are made in a service manager, highlight specific features of various particular pieces of service management software, and give some incidental background on why I’ve made the choices I have in the design of Dinit (though I’ll try to keep this from being too Dinit-focused).

Recap: supervision system vs service manager vs system manager

The various terms – supervision, service manager, system manager – sometimes get thrown around a little loosely, but for my purposes here it’s better to have a clear distinction between them. Without further ado:

Supervision system: a process or means for supervising service processes, providing a means to start and terminate individual services and perhaps to automatically restart them if they terminate unexpectedly.

Into the category of supervision system fall the likes of daemon-tools, runit and S6. Note that a supervision system need not be made up of just a single process: it might supervise individual service processes using separate supervisor processes, for example. Also, an active “service” might not necessarily correspond to a running process (for example, a “network” service could be made active by executing a script which terminates once the network interfaces are configured).

The next category is that of service manager:

Service manager: a process or means for starting or stopping services which have dependencies from and to other services, such that the dependencies of a service must be started before the service itself is started, and the dependents of a service should be stopped before the service itself is stopped.

So, compared to a supervision system, this adds the concept of dependency management. Some might disagree that “service manager” should entail dependency handling, but for our purposes here it’s useful to have a convenient name for such a distinction, so we make the separation – dependency-handling service management versus individual service supervision.

Note that it may be possible to implement a service manager as an additional component on top of a separate supervision system – for example, S6-RC and Anopa both implement service management over the S6 supervision system.

This brings us to the final category:

System manager: a process (or processes) responsible for controlling system startup, shutdown, and other system-level actions.

A system manager typically has to arrange for the bring-up and stopping of services, which it may do by also being – or by delegating to – a supervision system or service manager. A system manager includes an init process which is launched by the kernel as the first userspace process at boot.

It’s worth noting at this point that, while a service manager built on a supervision system typically requires tight coupling with that underlying system – it needs to know the specific details of how to start and stop services, and to observe changes in service state – a system manager can, in comparison, maintain quite a loose coupling; it only needs to tell the supervision system (or service manager) to start and to stop, and can leave the handling of individual services to the supervisor’s care.

I should add that different systems use different terminology for what Systemd calls “units”, the basic concept of a thing that can be started and stopped and can have dependencies on other units. In Systemd terminology, a “service” and a “target” are different types of unit. Other systems just stick with “service” for everything, regardless of whether there’s a process or other functionality attached. The distinction isn’t particularly useful here, so I’ll use the terms unit, target, and service more or less as synonyms.

Pure supervision as service management

In my definitions above, I outlined the primary distinction between supervision systems and service managers as being a question of dependency management.

However, a system where services technically have interdependencies can work with a supervision system that doesn’t manage dependencies. In the most basic form, it’s possible to rely on the fact that a service will naturally fail if its dependencies are not satisfied; it should then be restarted (ideally with a gradually increasing delay) by the supervisor, until the dependency itself has become available.

It may also be possible to explicitly start any dependencies as part of a service’s startup script (and optionally also stop known dependents as part of a stop script). The runit documentation suggests:

  • before providing the service, check if all services it depends on are available; if not, exit with an error, and the supervisor will then try again.
  • optionally, when the service is told to go down, take down other services that depend on this one after disabling the service.

Certainly this can work. Although checking that dependencies are available prior to starting is, in general, prone to a race condition (nothing prevents a dependency from stopping just after the check is made), this seems unlikely to be a common problem in practice. In fact, the combined technique outlined above allows a quite simple supervision system to provide much of the functionality associated with a service manager, provided that the dependencies are correctly encoded in the start/stop scripts.
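
As a concrete illustration, a runit-style run script following the pattern the runit documentation describes might look something like the following. This is only a rough sketch: the service and dependency names are hypothetical, and the exact way a dependency is checked (sv check here) will vary between setups.

```
#!/bin/sh
# Hypothetical runit-style ./run script for a web application that needs the
# database service to be up before it can start. Names and paths are illustrative.

# Ask the supervisor of the "postgresql" service whether it is up; if not,
# exit with an error and let the supervisor retry this script a little later.
sv check postgresql >/dev/null || exit 1

# The dependency looks available: exec the daemon in the foreground so that it
# remains under the supervisor's control.
exec mywebapp --foreground
```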

However, that niggling race condition remains. For services which, for whatever reason, won’t behave as we want them to when dependencies are (or become) unavailable, this could potentially be problematic. Is it a stretch to claim that such services may in fact exist? Maybe it is, though I’m not particularly willing to vouch that various web app frameworks won’t lock themselves up if the DBMS becomes unavailable for a little too long, for example.

There’s also the fact that continuously polling to start services will consume system resources (only very little, if the “check for dependencies first” approach advocated by the runit documentation is followed; perhaps a significant amount if it’s not). It may also make noise in log files: service X can’t start, service X still can’t start, …, and so on. And a polling approach means that, when the dependencies of some service do become available, there may be a little delay before the service itself starts: the supervisor has to decide to try to start it again, and has no cue to do this other than some timer expiring. These are minor issues by themselves, of course.

One advantage of proper dependency-handling service management is that you can usually query the system for dependency information (“what other services will need to be started in order to start service X?”, “what is the total set of dependencies for service X?”, etc).

Laurent Bercot, S6-RC author, gives his own argument for dependency management:

The runit model of separating one-time initialization (stage 1) and daemon management (stage 2) does not always work: some one-time initialization may depend on a daemon being up. Example: udevd on Linux. Such daemons then need to be run in stage 1, unsupervised – which defeats the purpose of having a supervision suite.

This seems a fair point and a good example, though I’m not sure it would be impossible to supervise even udevd in a supervision-only system (even if it might require tweaking the existing systems a little).

I’m certainly in favour of dependency-managing systems (and of course Dinit is such a system), though I’m aware the arguments for it may sound a little wishy-washy, and to some degree it’s a matter of personal preference.

Complexity level of dependency relationships

Different service managers provide different dependency configuration options, with differing levels of complexity.

At the simplest end, S6-RC offers only a single type of dependency: a service can depend on another, and will not start unless the other starts first. It appears to be unusual in this regard, however. Many systems have the concept of a soft dependency – one which should be started along with a dependent, but whose failure should not cause the dependent to also fail. Hard and soft dependencies are named differently in different systems (needs, requires, depends-on versus wants, waits-for).

The benefit of a soft dependency is essentially that you can enable a service without its failure preventing your system from booting due to the rollback that would otherwise result (assuming that the system performs such rollback; discussion of the activation model and rollback is yet to come).

OpenRC has both a needs and a uses/wants relationship (“uses” and “wants” have different semantics depending on whether the dependency has been enabled in the current runlevel; most other service managers have largely done away with the concept of runlevels).

Nosh has requires and wants relationships, and separately supports start-ordering relationships (before/after, indicating that another service’s start/stop should be ordered with respect to this service, even if there is no dependency between them). Nosh dependencies can be specified in both directions (this service requires that service; this service is required-by that service). It also has a conflicts relationship: if one service is started it can force another to stop, and vice versa.

Systemd is a law unto itself, with more dependency types than you can count on one hand; consider it as Nosh++ (though I believe Systemd came first, and Nosh borrowed from it, rather than the other way around). It’s not clear how commonly useful most of the dependency types are, though they were presumably implemented with reasons in mind.

For Dinit, I eventually opted for three dependency types: depends-on (requires), waits-for (wants), and depends-ms (depends as a milestone: the dependency must start for the dependent to start, but once started it effectively becomes a waits-for dependency). The latter, depends-ms, is of somewhat dubious value and may be removed if I cannot find a compelling scenario for it. In my eyes three dependency types (or, even better, two) is a nice middle ground giving good functionality with relatively low complexity.
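
To make that concrete, here is a sketch of how these dependency types can appear in a Dinit service description; the service names are hypothetical and the precise set of recognised settings is documented in the dinit-service man page, so treat this as illustrative rather than definitive.

```
# Sketch of a Dinit service description for a hypothetical "myapp" service.
type = process
command = /usr/bin/myapp --foreground

# Hard dependency: "database" must be started before myapp, and if it stops,
# myapp is stopped as well.
depends-on = database

# Soft dependency: "logging" is started along with myapp, but its failure does
# not prevent myapp from starting.
waits-for = logging

# Milestone dependency: "network" must start for myapp to start, but thereafter
# it is treated like a waits-for dependency.
depends-ms = network
```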

Systemd documentation mentions the common requirement for a dependent to start only once its dependency has properly started:

It is a common pattern to include a unit name in both the After= and Requires= options, in which case the unit listed will be started before the unit that is configured with these options.
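
In unit-file terms that pattern looks something like the following fragment (the unit names are hypothetical):

```
# myapp.service (illustrative fragment)
[Unit]
# Requires= makes database.service a hard dependency; After= additionally
# orders the start of this unit after database.service has come up.
Requires=database.service
After=database.service

[Service]
ExecStart=/usr/bin/myapp --foreground
```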

I do not see any compelling reason for having ordering relationships without actual dependency, as both Nosh and Systemd provide for. In comparison, Dinit’s dependencies also imply an ordering, which obviates the need to list a dependency twice in the service description.

Activation model of service managers

Suppose that we have two services – A and B – and that the first depends on the second. When A is started, B will also be started. The question is: what if A is then stopped?

There are two somewhat reasonable answers:

  1. Since the action was to start and stop a single service, the state of all services should return to what it was before either action. B should therefore stop, since it has not been explicitly started (i.e. rollback should occur naturally).
  2. Services should start, or stop, only when required to do so. Since B started when A was started, and has not been required to stop, it should not stop.

I believe that most systems take the second approach, but Dinit takes the first (and tracks which services have been explicitly activated versus which have started only because they are required by a dependent).

I am not certain that either approach is definitely better than the other. The first provides a nice consistency for the scenario described (starting and then stopping a service will generally return the system to the original state), and avoids potentially leaving unneeded services running; the second on the other hand reduces overall service transitions.

Advocating for the first approach, one benefit is that it is simple to emulate runlevels. If you set up each runlevel as a service (target, unit) which depends on the services that should run in that runlevel, then you can “switch runlevels” by starting the new runlevel service and stopping the old one. There is no need to explicitly set any services to stop: if they are not required by the currently active runlevel, they will stop anyway (although additional services can always be activated via an explicit command).
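
Sketched in Dinit’s service description format, such a runlevel-like service might look roughly like this (an illustrative arrangement rather than anything prescribed; internal is the service type used when there is no process attached):

```
# "multi-user": a hypothetical runlevel-style target with no process of its own.
type = internal

# The services making up this "runlevel"; soft dependencies, so a single failing
# service does not prevent the target from being reached.
waits-for = network
waits-for = sshd
waits-for = crond
```

Switching from one such target to another is then just a matter of starting the new one and stopping the old; under the first activation model, services needed only by the old target stop of their own accord.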

(Compare to Systemd’s approach to runlevels: it implements a separate command, “isolate”, to deactivate services not belonging to the new runlevel).

Also, with the first approach, boot failure is detectable as all services stopping without having received a shutdown command. That is, “boot” is a service with dependencies; if one of the necessary dependencies fails to start, “boot” will also fail, and at that point it releases all other (successfully started) dependencies, so that they then stop. There is no need to have “special” knowledge of the boot service, or to have a special failure case for that particular service. This is arguably just an implementation detail, though.

Now advocating for the second approach: consider the case of repeatedly attempting to start a service which has several dependencies, but which is failing due to a configuration issue: the administrator tries to start the service, and watches as its dependencies start and then stop again since the service itself failed to start. They then attempt to repair the configuration, but do not succeed, and on attempting to start the service again see the dependencies bounce up and then down a second time (let’s hope they get it right the third time…). This would be avoided with the second approach, since the dependencies would simply remain active when the service failed to start.

The problem described above could probably be avoided, even with the first approach, in various ways, but any solution would no doubt add a little more complexity to the system.

I personally still find the first model more natural and compelling – but again, it’s arguably just personal preference.

Special targets

Some systems have special targets with special semantics. Often certain targets are started to perform, or as part of, particular system actions: a shutdown target can be started when the system is to shut down, for example. Systemd has a large list of special targets, including targets that get created by Systemd when certain hardware is detected, and targets to represent mount points, which Systemd has special handling for.

Systemd also adds dependencies automatically to or from special targets. For the basic target:

systemd automatically adds dependency of the type After= for this target unit to all services (except for those with DefaultDependencies=no).

And for the dbus.socket unit:

A special unit for the D-Bus system bus socket. All units with Type=dbus automatically gain a dependency on this unit.

(The dbus unit is for launching the D-Bus daemon, and causes Systemd to connect to the bus after the unit starts. Systemd and D-Bus are somewhat intertwined; D-Bus has the ability to start service providers by communicating with Systemd, and Systemd exposes various services via D-Bus, as well as being able to determine that a service is ready via a D-Bus name becoming available).

Other service managers don’t tend to have as many special targets. Nosh documents a few in its system-control man page, but not as many as Systemd, and it has no special relationship to D-Bus, for example. Dinit uses boot as the default service to start, but otherwise does not treat that service specially in any way; other design choices (such as the activation model) made special treatment unnecessary.

Service description/configuration mechanism

A number of supervision/service managers have gone with a “directory-per-service” approach (which I think was perhaps pioneered by daemon-tools? I’m not sure). In the directory you have a script used to run the service, some files which each contain a single parameter setting, and perhaps a subdirectory containing links to dependencies. (That’s a broad stroke; many of the systems have subtle differences: S6-RC dependencies are listed one per line in a “dependencies” file, for example.) The benefit of having one setting per file is that it requires no parsing and keeps the system simpler. The downside is that it is a little more awkward to review the whole service configuration at once (though tooling can help).
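
As a rough illustration of the directory-per-service style, loosely modelled on S6-RC’s source format (file names and contents differ between systems, so this is a sketch only):

```
myapp/
  type            # contains "longrun": a supervised, long-running process
  run             # script which execs the daemon process
  dependencies    # dependency names, one per line (e.g. "database")
  timeout-up      # a single parameter: how long to wait for the service to start
```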

Other systems – including the venerable Sys V init, as well as OpenRC – simply have a script per service. In the case of OpenRC, the script (optionally) has a special interpreter, openrc-run, which offers dependency-handling functions. Various metadata is extracted from the scripts (and cached in a separate database).

Dinit and Systemd both use a single file per service (“.ini” style). I find this generally more convenient for editing service descriptions; the downside is that parsing is required. In the case of Systemd running as system manager, this means parsing in the PID 1 process, which many would frown upon. I’m not convinced this is really a big problem; Dinit’s configuration parser is quite simple and has proved robust (in my own use) – though it’s worth noting that Dinit doesn’t demand that it run as a system manager (PID 1), whereas Systemd does expect this (“Note that it is not supported booting and maintaining a full system with systemd running in --system mode, but PID not 1”).

S6-RC is unusual in that it requires the service descriptions to be compiled into a database. OpenRC, as mentioned, also stores service metadata separately to the service script, but only as a cache. In either case, I suppose it is potentially possible for the compiled data and the source to become inconsistent, though I doubt it is much of a problem in practice.

Monolithic vs modular process design

One question around the design of a supervision/service/system manager is: how many processes should make it up? A number of the smaller and simpler systems have gone for the approach of breaking things up into many processes. Taking S6-RC as a case in point, the service manager (S6-RC) is separate to the main supervision process (s6-svscan, of S6), which in turn runs supervisor processes (s6-supervise) which, finally, run the service process. Typically the service process is launched via an execline script, which allows calling various chain-loading subprograms to set up the environment, UID/GID, etc.
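
For instance, an S6-style run script typically chain-loads a couple of small helper programs before finally becoming the daemon process. A rough sketch (the interpreter path and the daemon name are illustrative):

```
#!/bin/execlineb -P
# Redirect stderr to stdout so that a logging service (if any) captures both streams.
fdmove -c 2 1
# Drop privileges via a chain-loading helper; each execline program simply
# execs into the remainder of the script, ending with the daemon itself.
s6-setuidgid mydaemonuser
mydaemon --foreground
```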

The idea behind breaking things up this way is, essentially, that it allows each component to be small, simple, and “obviously correct”. There are those who argue that this approach fits the “unix philosophy” of “do one thing and do it well”. This is not an entirely bogus argument; by limiting the function of an individual program, it’s somewhat easier to make sure that the program is fundamentally correct.

On the other hand, composing multiple small programs into a more complex system still results in, well, a more complex system. If the functions of a system can easily be decomposed into separate processes, they can most likely be decomposed to individual modules within a single-process program as well. (And, having multiple processes comes with its own disadvantages: certain system-level functionality is only going to be possible to implement by communicating between modules; if the modules are separate processes, that means inter-process communication, and in general that’s going to increase complexity significantly. This might not prove to be a problem for a service manager, though, if the need for such communication is really limited).

The main point that I am trying to make is that breaking functionality into separate processes does not make the overall system any simpler. It may offer an advantage in terms of making it possible to use the individual components separately, but it’s not clear to me that this is really useful. Probably the main real benefit is, potentially, an increase in robustness: if one of your various sub-processes does crash, it won’t necessarily bring down the whole system.

Enter Systemd into the discussion. Systemd insists on incorporating not only service management and supervision into a single process, but system management as well: it wants to run the whole thing as PID 1, a process which, if it crashes, causes the kernel to panic (at least on Linux) and thus really does bring the whole system tumbling down.

For Dinit, in comparison, I felt no concern about having service management and supervision together in a single process. In fact, Dinit also supports running as a system manager, within the same process – but it does not require this; Dinit is quite happy to act as a system-level service manager while another process acts as the system manager. Additionally, Dinit is just generally far simpler than Systemd (as should be clear by now).

Some people are always going to prefer breaking things up into processes that are essentially as small as possible: I can understand this to an extent, I just don’t agree that it’s always a worthwhile goal, and I don’t think that Dinit suffers from being less modular than many of the alternatives.

Robustness and failure modes

The decision to write important system-level software in non-memory-safe languages such as C and C++ has been criticised. Yet, such software continues to be written in such languages (although certain other options such as Rust and Go have been gaining traction recently).

One of the systems I haven’t mentioned up to this point is GNU Shepherd; mainly, my concern is that it’s written in Guile, an interpreted (or bytecode-interpreted) language with garbage collection – and I see both the “interpreted” and the “garbage collection” parts as undesirable for system-level software (especially for a potential init). Interpreted software will be less efficient (if not in actual speed – I’ll acknowledge that JITs can do amazing things – then at least in memory usage), and garbage collection presents a similar issue. If the software were so complex that we couldn’t make it robust without using a memory-safe language/runtime – and if we weren’t willing to use Rust or another GC-less option for some reason – then perhaps the use of GC would be acceptable, but I don’t believe that’s actually the case; Dinit has so far proven to be robust, and even Systemd, despite early foibles, rarely actually crashes (even if it fails in other ways, as occasional rumbles on the web suggest).

A real concern of GC’d languages generally is, can programs in these languages be made resilient to out-of-memory conditions (are allocations even always explicit)? I haven’t looked closely enough at Shepherd to be able to pass comment, but I would not be surprised if it turned out that memory allocation failure is not something it is designed to handle (I’d be happy to be shown otherwise). Despite the low probability of an out-of-memory situation occurring, I still think it’s something that a service manager – and especially a system manager – needs to be able to deal with.

Conclusion

Well, that ends our tour of concerns. If you got this far – thanks for reading, and I hope it was interesting and informative. There are of course a lot of other aspects of service manager design – and some unique features of particular systems – but this article has gotten quite long already. Please feel free to add constructive comment, correction or discussion.

