James Page
on 30 April 2015
Here are some selected highlights from this most recent charm release.
OpenStack Kilo support
As always, we’ve enabled charm support for OpenStack Kilo alongside its upstream development. To use this new release, set the openstack-origin configuration option of the charms, for example:
juju set cinder openstack-origin=cloud:trusty-kilo
NOTE: Setting this option on an existing deployment will trigger an upgrade to Kilo via the charms – remember to plan and test your upgrade before rolling it out to production!
Neutron
As part of this release, the team have been working on enabling some of the new Neutron features that were introduced in the Juno release of OpenStack.
Distributed Virtual Router
One of the original limitations of the Neutron reference implementation (ML2 + Open vSwitch) was the requirement to route all north/south and east/west network traffic between instances via network gateway nodes.
For Juno, the Distributed Virtual Router (DVR) function was introduced to allow routing capabilities to be distributed more broadly across an OpenStack cloud.
DVR pushes much of Neutron’s layer 3 routing function directly onto compute nodes – instances with floating IPs are no longer constrained to route north/south traffic via a gateway node. This traffic is now pushed directly to the external network by the compute nodes via dedicated external network ports, bypassing the requirement for network gateway nodes.
Network gateway nodes are still required for SNAT northbound routing for instances that don’t have floating IP addresses.
For the 15.04 charm release, we’ve enabled this feature across the neutron-api, neutron-openvswitch and neutron-gateway charms – you can toggle this capability using configuration in the neutron-api charm:
juju set neutron-api enable-dvr=true l2-population=true \
overlay-network-type=vxlan
This feature requires that every compute node have a physical network port onto the external, public-facing network – this is configured on the neutron-openvswitch charm, which is deployed alongside nova-compute:
juju set neutron-openvswitch ext-port=eth1
NOTE: Existing routers will not be switched into DVR mode by default – this must be done manually by a cloud administrator. We’ve also only tested this feature with vxlan overlay networks – expect gre and vlan enablement soon!
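For illustration, converting an existing router to DVR mode with the Kilo-era neutron client looks something like this (router1 is a hypothetical router name; the router has to be taken down while its mode is changed):
neutron router-update router1 --admin-state-up False
neutron router-update router1 --distributed True
neutron router-update router1 --admin-state-up True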
Router High Availability
For clouds where the preference is still to route north/south traffic via a limited set of gateway nodes, rather than exposing all compute nodes directly to external network zones, Neutron has also introduced support for running virtual routers in highly available configurations.
To use this feature, you need to be running multiple units of the neutron-gateway charm – again it’s enabled via configuration in the neutron-api charm:
juju set neutron-api enable-l3ha=true l2-population=false
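Once enabled, new routers can be created in HA mode; for example, with the neutron CLI (router1 is just an illustrative name):
neutron router-create --ha True router1
Behind the scenes, Neutron schedules the router across multiple gateway nodes and uses keepalived (VRRP) to fail over between them.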
Right now the Neutron DVR and Router HA features are mutually exclusive due to their layer 2 population driver requirements – DVR requires l2-population to be enabled, while Router HA requires it to be disabled.
Our recommendation is that these new Neutron features are only enabled with OpenStack Kilo, as numerous fixes and improvements have been made in the six months since they first appeared in OpenStack Juno.
Initial 0mq support
The 0mq lightweight messaging kernel is a library which extends the standard socket interfaces with features traditionally provided by specialised messaging middleware products, without the requirement for a centralized message broker infrastructure.
Interest and activity around the 0mq driver in Oslo Messaging has been gathering pace during the Kilo cycle, with numerous bug fixes and improvements being made into the driver code.
Alongside this activity, we’ve enabled 0mq support in the Nova and Neutron charms in conjunction with a new charm – ‘openstack-zeromq’:
juju deploy redis-server
juju deploy openstack-zeromq
juju add-relation redis-server openstack-zeromq
for svc in nova-cloud-controller nova-compute \
           neutron-api neutron-openvswitch quantum-gateway; do
    juju deploy $svc
    juju add-relation $svc openstack-zeromq
done
The 0mq driver makes use of a Redis server to maintain a catalog of topic endpoints for the OpenStack cloud so that services can figure out where to send RPC requests.
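Under the hood, each service’s oslo.messaging configuration is pointed at the zmq driver and the Redis matchmaker – an illustrative fragment only; treat these option names as assumptions based on the Kilo-era driver rather than exactly what the charms write:
[DEFAULT]
rpc_backend = zmq

[matchmaker_redis]
host = <address of the redis-server unit>
port = 6379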
We expect to enable further charm support as this feature matures upstream – so for now please consider this feature for testing purposes only.
Deployment from source
A core set of the OpenStack charms has also grown the capability to deploy from git repositories, rather than from the usual Ubuntu package archives. This allows all of the power of deploying OpenStack using charms to be reused for deployments from active development branches.
For example, you’ll still be able to scale out and cluster OpenStack services deployed this way – seeing a keystone service deployed from git, running with haproxy, corosync and pacemaker as part of a fully HA deployment, is pretty awesome!
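As a sketch, pointing the keystone charm at the stable/juno branches looks something like this – assuming the openstack-origin-git configuration option, which takes a YAML description of the repositories to install from:
juju set keystone openstack-origin-git="$(cat keystone-juno.yaml)"
where keystone-juno.yaml contains something like:
repositories:
  - {name: requirements, repository: 'git://github.com/openstack/requirements', branch: stable/juno}
  - {name: keystone, repository: 'git://github.com/openstack/keystone', branch: stable/juno}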
This feature is currently tested with the stable/icehouse and stable/juno branches – we’re working on completing testing of the kilo support and expect to land that as a stable update soon.
This feature is considered experimental and we expect to complete further improvements and enablement across a wider set of charms – so please don’t use it for production services!
And finally…
Alongside the features delivered in this release, we’ve also been hard at work resolving bugs across the charms – please refer to the milestone bug report for the full details.
We’ve also introduced features to enable easier monitoring with Nagios, added support for Keystone PKI tokens, and improved the failure detection capabilities of the percona-cluster charm when operating in HA mode.
You can get the full lowdown on all of the changes in this release from the official release notes.