I should start by saying that because OpenStack is an open source project, it is hard to know exactly what will land in Havana: the developers are volunteers, and sometimes things get in the way of the work they intended to do. However, these are my notes on what I saw as the high points of the summit. I didn't see all the same sessions as other nova developers, so hopefully others will pitch in with their notes as well.
Scheduler
The scheduler seems to be a point of planned work for a lot of people this release, with talk of moving more scheduling code into the common library and of adding new filter types. There is definite interest in being able to schedule by methods we don't currently support, things like rack or PDU diversity, or trying to co-locate a tenant's machines together. HP is also interested in being able to sell dedicated machines to tenants: in other words, they would guarantee that only one tenant's instances appeared on a machine, in return for a fee. At the moment this requires setting up a host aggregate for the tenant.
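To make the dedicated machines idea a bit more concrete, here is a rough sketch of what a scheduler filter along these lines might look like. This isn't a real nova filter and the DEDICATED_HOSTS mapping is an invented placeholder; it just shows the shape of the host_passes() interface that new filter types would plug into.

    # Hypothetical sketch only: keep "dedicated" hosts reserved for a single
    # tenant. DEDICATED_HOSTS is an invented placeholder; a real implementation
    # would pull this from host aggregate metadata or similar.
    from nova.scheduler import filters

    DEDICATED_HOSTS = {
        'compute17': 'tenant-a-project-id',
    }

    class DedicatedTenantFilter(filters.BaseHostFilter):
        def host_passes(self, host_state, filter_properties):
            props = filter_properties.get('request_spec', {}).get(
                'instance_properties', {})
            project_id = props.get('project_id')
            owner = DEDICATED_HOSTS.get(host_state.host)
            if owner is None:
                # Not a dedicated host, so anyone can land here.
                return True
            # Dedicated hosts only accept instances from their owner.
            return owner == project_id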
Feeding additional data into scheduling decisions
There is also interest in being able to feed more scheduling information to nova-scheduler. For example, ceilometer intends to start collecting monitoring data from nova-compute nodes, and perhaps it might inform nova-scheduler that a machine is running hot or has a degraded RAID array. Ceilometer might also be the source of PDU or CRAC failure information which could affect scheduling decisions. These latter two examples are interesting because they are information it doesn't make sense to collect on the compute node; the correct location for this information is a data center wide system, not an individual machine. There is concern about nova-scheduler depending on other systems, so these updates from ceilometer will probably be advisory, with nova-scheduler degrading gracefully if they are not present or are stale.
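As a way of picturing the "advisory, degrade gracefully" behaviour, here is a small sketch. The get_advisory_metrics() helper, the metric names, and the thresholds are all made up; the point is only that missing or stale data turns into "no opinion" rather than an error.

    # Hypothetical sketch: treat ceilometer-style health data as advisory.
    # get_advisory_metrics() and the metric names are invented placeholders.
    import time

    MAX_METRIC_AGE = 120  # seconds; older data is ignored

    def host_health_penalty(hostname, get_advisory_metrics):
        metrics = get_advisory_metrics(hostname)
        if not metrics or time.time() - metrics['timestamp'] > MAX_METRIC_AGE:
            # No data, or the data is stale: don't penalise the host at all.
            return 0.0
        penalty = 0.0
        if metrics.get('running_hot'):
            penalty += 1.0
        if metrics.get('raid_degraded'):
            penalty += 2.0
        return penalty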
Mothballing
This was almost instantly renamed to “shelving”, but “swallow / spew” was also considered. This is a request that Rackspace sees from customers — basically the ability to stop a virtual machine, but keep the UUID and IP addresses associated with the machine as well as the block device mapping. The proposal is to implement this as a snapshot of the machine, and a new machine state. The local disk files for the instance might get deleted if the resources are needed. This would feel like a reboot of an instance to a user.
This is of interest for workloads like "Black Friday" web servers. You could bring up a whole bunch of instances, configure security groups, load balancers, and the applications on the instances, and then shelve them. When you need an instance to handle load, you'd unshelve it, and once it had booted it would just magically start serving. Expect shelved instances to be cheaper than running instances, but not free, mostly because IP addresses are scarce. Restarting a shelved instance might take a while if the snapshot has to be fetched to a compute node, so if you need more "instant on" bursting capacity, just leave instances idling and pay full price.
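To make that workflow concrete, here is roughly how I would expect it to look from python-novaclient, assuming shelve and unshelve end up exposed as server actions. None of this API is final; the calls and names are my guess at the shape of the proposal.

    # Hypothetical sketch of the shelve / unshelve workflow; the shelve() and
    # unshelve() server actions are assumed here, not a finalised API.
    from novaclient.v1_1 import client

    nova = client.Client('username', 'password', 'project',
                         'http://keystone:5000/v2.0')

    server = nova.servers.find(name='black-friday-web-01')

    # After the instance is fully configured (security groups, load balancer
    # membership, application install), park it cheaply but keep its UUID,
    # IP addresses and block device mapping.
    nova.servers.shelve(server)

    # When the load arrives, bring it back; booting may take a while if the
    # snapshot has to be fetched to a compute node first.
    nova.servers.unshelve(server)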
Deferred instance file delete
This is a nice-to-have requirement for shelving instances, but it is useful for other things as well. It is the ability to delay the deletion of instance files when an instance is torn down, which might end up being expressed as "keep these files for at least X days, unless you are tight on disk resources". I can see other reasons this would be useful, for example helping support staff rescue data from instances users tore down and now want back. It also defers the disk IO of deleting the files until it is absolutely necessary; we could perhaps even detect times when the disks are relatively idle and use those to clean up file systems.
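The policy being described is roughly "age-based deletion, unless disk pressure forces the issue". A toy sketch of that decision, with invented thresholds and paths, might look like this:

    # Hypothetical sketch of a deferred-delete policy. The thresholds and the
    # instances path are invented for illustration.
    import os
    import shutil
    import time

    KEEP_FOR_DAYS = 7
    DISK_PRESSURE_THRESHOLD = 0.90  # start reclaiming above 90% used

    def should_delete(instance_dir, instances_path='/var/lib/nova/instances'):
        age_days = (time.time() - os.path.getmtime(instance_dir)) / 86400.0
        usage = shutil.disk_usage(instances_path)
        under_pressure = float(usage.used) / usage.total > DISK_PRESSURE_THRESHOLD
        # Keep recently torn down instances around unless we need the space.
        return age_days > KEEP_FOR_DAYS or under_pressure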
DNS in nova-network
Expect to see the current DNS driver removed, as best we can tell no one uses it. This will be replaced with a simpler driver in nova-compute, and the recommendation that deployers use quantum DNS where possible.
Quantum
There is continued work on making quantum the default networking engine for nova. There are still some missing features, but the list of absolutely blocking features is getting smaller. A lot of discussion centered on how to live upgrade clouds from nova-network to quantum. This is not an easy problem, but smart people are looking at it. The solution might involve moving compute nodes over to quantum and then live migrating instances onto those compute nodes. However, nova currently only supports one network driver at a time, so we will need to change some code here.
Long running periodic tasks
There will be a refactor of the periodic task code in nova this release to move periodic tasks which incur a lot of blocking IO into separate processes. These processes will be launched by nova-compute, and not be cron jobs or something like that. Most of the discussion was around how to do this safely (eventlet makes it exciting), which is nice in that it indicates some level of consensus that this is needed. The plan for now is to do this in nova-compute, but leave other nova components for later releases.
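As a rough illustration of the direction rather than the actual design: the idea is that IO-heavy work runs in a child process that nova-compute launches and supervises, instead of inside the eventlet-driven service itself. Something like this sketch, where the task body is a placeholder:

    # Hypothetical sketch: run an IO-heavy periodic task in a child process
    # so it can't block the eventlet loop in nova-compute.
    import multiprocessing
    import time

    def image_cache_cleanup():
        while True:
            # ... walk the image cache and remove unused base images ...
            time.sleep(600)

    def launch_periodic_workers():
        worker = multiprocessing.Process(
            target=image_cache_cleanup, name='image-cache-cleanup')
        worker.daemon = True
        worker.start()
        return worker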
Libvirt changes
Libvirt is the compute driver I work on, so it’s the only one I want to comment on here. The other drivers are doing interesting things as well, I just don’t want to get details wrong by not understanding their efforts.
First off, there should be some work done on better console logging in Havana. At the moment we use an unbounded file on disk. This will hopefully become a Unix domain socket managing a ring buffer of some form. The Unix domain socket leaves the option open of later making this serial console interactive, but that’s not an immediate goal.
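To give a feel for the shape of the ring buffer idea (not the actual implementation), a fixed-size buffer fed from a Unix domain socket might look something like the sketch below. The socket path and buffer size are invented placeholders.

    # Hypothetical sketch of a console log ring buffer fed by a Unix domain
    # socket. The socket path and ring size are placeholders.
    import collections
    import socket

    RING_SIZE = 4096  # keep only the most recent 4096 lines

    def consume_console(path, ring_size=RING_SIZE):
        ring = collections.deque(maxlen=ring_size)
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(path)
        buf = b''
        while True:
            data = sock.recv(4096)
            if not data:
                break
            buf += data
            while b'\n' in buf:
                line, buf = buf.split(b'\n', 1)
                ring.append(line)  # old lines fall off the far end
        return ring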
There was a lot of talk about LXC support, and how we need to support file system attachments as well as block devices. There is also some cleanup that could be done to the LXC support in the libvirt driver to make the code cleaner, but it is not clear who will work on this.
imagebackend.py will probably get refactored in ways that don't make a big difference to users, but do make it easier to code against (and therefore more reliable). I'm including it here just because I'm excited about that refactor making this code easier to understand.
There was a lot of talk about live migration and the requirement for ssh between compute nodes. Operators don’t love that compute nodes can talk to each other, but expect Havana to include some sort of on demand ssh key management, and a later release to proxy that traffic through something like nova-conductor.
Incremental backups are of interest to deployers as well, but there is concern that glance needs more support for chains of images before we can do that.
Conclusion
The summit was fantastic once again, and the Foundation did an awesome job of hosting it. It was however a pretty tiring experience, and I’m sure I got some stuff here wrong, or missed things that others would consider important. It would be cool for other developers to write up summaries of what they saw at the summit as well.