
CloudStack 4.0.0 – released

Moving to the ASF has been an interesting process and quite a whirlwind. In addition to all of the shuffling of infrastructure to the ASF, the migration of codebases, and the licensing work, there’s been an amazing influx of participation in the project.

In addition to all of this shuffling of resources, we still have the business of running the project, producing software, and hopefully getting a release ready. Getting a release ready means complying with plenty of new policies and guidelines, and CloudStack is a pretty established and large codebase, so there was plenty to audit and consider and a few things needing actual change. We finally (as you have hopefully heard) had a release that met those standards and was approved by both the community and the IPMC. This is noteworthy in and of itself, but while many were working on getting the codebase into shape, others continued to work on their desired features – and the number of things that happened concurrently astounds me:

This isn’t an exhaustive list – for that, see the Release Notes – but it includes:

  • Inter-VLAN routing (aka VPC)
  • Site-to-site VPN
  • Local storage for data volumes
  • Nicira NVP SDN support
  • Object storage via Caringo
  • CLVM support (returns)
  • Ceph (RBD) support for KVM

You know you want all of this goodness – and fortunately the 4.0.0-incubating release is available, so you can download it and begin using it today.


Secondary storage scalability configuration and tuning in CloudStack

Someone asked about this recently on the cloudstack-users mailing list, and I figure it is both an interesting design point of CloudStack and a specific piece of knowledge that others will find useful.

So first, let’s talk about what secondary storage is – especially if you are not familiar with CloudStack’s hierarchy. Secondary storage is where templates (that is, non-running disk images) and snapshots are stored. The Secondary Storage VM (SSVM) handles a number of things, most importantly shuttling images and snapshots between the various primary storage resources (where running disk images are stored) and the secondary storage resources. The SSVM also handles receiving uploaded disk images and aging/scavenging snapshots. The SSVM is really just some CloudStack agent software running atop a Linux VM, but bears the name SSVM because of the functionality it provides.

So why run this storage management service as a stateless virtual machine instead of incorporating it into some centralized all-in-one management server? First, your management server might not be network-proximate to your secondary storage resource, particularly if you have multiple geographically disparate datacenters with resources being managed from just a few locations. Secondly, all-in-one isn’t very scalable. With stateless ‘worker VMs’, CloudStack can dynamically scale the number of VMs up to deal with load. At a minimum, a single SSVM will exist in each zone. CloudStack uses two configuration options to determine when to scale up and add more:

  • secstorage.capacity.standby
  • secstorage.session.max

secstorage.capacity.standby is the minimum number of command-execution sessions that the system must be able to serve immediately. secstorage.session.max is the maximum number of command-execution sessions that an SSVM can handle. By manipulating these settings (raising the former and lowering the latter) you can trigger automatic scale-out sooner, at lower load – or you can let the defaults do their job.
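If you would rather script this than change global settings through the UI, here is a minimal sketch of how it might look using the updateConfiguration API call. The management server URL, the API credentials, and the values themselves are placeholders, and note that most global settings only take effect after the management server is restarted.

    import base64
    import hashlib
    import hmac
    import urllib.parse
    import urllib.request

    API_URL = "http://mgmt-server:8080/client/api"  # placeholder management server
    API_KEY = "your-api-key"                        # placeholder credentials
    SECRET_KEY = "your-secret-key"

    def call(command, **params):
        """Sign and issue a CloudStack API GET request (HMAC-SHA1 request signing)."""
        params.update({"command": command, "apikey": API_KEY, "response": "json"})
        # Sort the parameters, URL-encode the values, and sign the lowercased string.
        query = "&".join(
            f"{k}={urllib.parse.quote(str(v), safe='')}" for k, v in sorted(params.items())
        )
        digest = hmac.new(SECRET_KEY.encode(), query.lower().encode(), hashlib.sha1).digest()
        signature = urllib.parse.quote(base64.b64encode(digest).decode(), safe="")
        return urllib.request.urlopen(f"{API_URL}?{query}&signature={signature}").read()

    # Keep more spare session capacity on standby and allow fewer concurrent
    # sessions per SSVM, so a new SSVM is started at lower load (values illustrative).
    call("updateConfiguration", name="secstorage.capacity.standby", value="20")
    call("updateConfiguration", name="secstorage.session.max", value="25")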

Of course that’s the new-fangled cloud way of doing things – just add more workers – but it’s not the only way of tuning your SSVM or making it more scalable.

First, you can just make the SSVM bigger. The default service offering is a single-vCPU machine with 500MHz of CPU and 256MB of RAM. That is pretty small, and it works well for environments where the rate of change isn’t high, snapshots are infrequent, and the environment itself is small. You can of course edit that service offering, or define a new, beefier one. If you define a new offering you need to point the secstorage.service.offering parameter at it, as sketched below. So not only can you scale out, you can also scale up.
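As a rough sketch of that scale-up path, reusing the call() helper above: the offering name and sizes are illustrative, and I’m assuming the createServiceOffering API call with its issystem and systemvmtype parameters, and that secstorage.service.offering takes the new offering’s ID.

    # Define a larger system offering for secondary storage VMs
    # (issystem/systemvmtype mark it as an SSVM offering; values are illustrative).
    resp = call(
        "createServiceOffering",
        name="Bigger SSVM",
        displaytext="2 vCPU @ 1 GHz, 1 GB RAM for secondary storage VMs",
        cpunumber=2,
        cpuspeed=1000,
        memory=1024,
        issystem="true",
        systemvmtype="secondarystoragevm",
    )

    # Point CloudStack at the new offering (substitute the ID returned above),
    # then destroy the running SSVM so it is recreated with the new offering.
    call("updateConfiguration", name="secstorage.service.offering", value="<new-offering-id>")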

Of course, nothing really trumps architecting things right in the first place. It’s pretty trivial to throw up a cloud computing environment with a single NIC on every host and let management, guest, public, and storage traffic traverse a single physical network. There isn’t inherently anything wrong with that, but if you really need to maximize efficiency, you can always have a dedicated storage network – and, even better, enable jumbo frames on your networking hardware, your hypervisors, and your storage hardware. Yes, on your SSVM as well: you can configure that with the secstorage.vm.mtu.size parameter.
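That setting follows the same updateConfiguration pattern as above – again just a sketch reusing the call() helper; 9000 is the usual jumbo-frame MTU, and the physical path has to carry it end to end.

    # Raise the MTU used by the SSVM's storage interface to the common jumbo-frame
    # size; switches, hypervisors, and storage must all carry 9000-byte frames too.
    call("updateConfiguration", name="secstorage.vm.mtu.size", value="9000")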

Even if you don’t have intense levels of snapshot and template activity, you may still want to tune your SSVM to make it more time-efficient.


Even more transitions – GitHub

CloudStack has maintained some sort of presence on GitHub for some time. In its earliest days this was merely a mirror of the primary git server at cloud.com. Gradually more and more code made its way to the GitHub account, including code for the knife-cloudstack plugin and a number of other interesting tools. Back in April CloudStack joined the ASF’s Incubator, and thus lots of resources needed to transition to the ASF’s infrastructure. The existence of a CloudStack GitHub account, particularly one with a repo for the original CloudStack code, also created some confusion as to where one was to get the canonical source for CloudStack. Today that GitHub account name has changed from CloudStack to CloudStack-extras (in the spirit of apache-extras), and there are disclaimers and pointers to the new home of the project. The original copy of the CloudStack codebase remains. (It’s important to realize that CloudStack was originally GPL, and thus keeping the source available remains a responsibility – which is further complicated since only the two latest of the 195 branches were moved to the ASF.)

Additionally the primary server that used to house CloudStack code (git.cloud.com) has been retired.

The canonical source code repo for CloudStack is (and has been for some time):
https://git-wip-us.apache.org/repos/asf/incubator-cloudstack.git

Of course we aren’t done migrating all of the resources; more of that is coming soon.



Open Source Cooperation

I’ve been participating in various open source communities for several years, but I am still occasionally blown away by how cooperative and efficient open source can sometimes be.

The most recent example came with today’s news about the 0.4.9 release of libvirt-java. libvirt-java has historically been LGPL, which, due to ASF licensing guidelines, meant that folks working on Apache CloudStack couldn’t include KVM support as part of the default build. The reason behind this is well grounded – people have an expectation that releases from an Apache project carry licenses that are more permissive than even the LGPL.

We discussed multiple ways of getting around this, including getting approval to make convenience builds which contained a non-default build option and included libvirt-java as a dependency. One of CloudStack’s committers, Wido den Hollander, who is also active in the libvirt community, communicated the various processes and struggles that we were going through with regard to KVM support. Within a few hours Daniel Veillard sent an email to all of the libvirt-java contributors asking them to consent to relicensing under MIT. Now, just over a month later, that work is complete and libvirt-java is MIT-licensed.

I’ve been party to more than one relicensing effort, and they are always painful, time-consuming, and some of the most boring work imaginable. To the libvirt-java community, and Daniel Veillard in particular: thanks!! I appreciate you tackling this and getting it accomplished so rapidly. I am sure that such responsiveness to folks who consume your work will continue to increase the number of folks who use it, and in the process you’ve made (at least) two projects better.


New features for the upcoming Apache CloudStack (incubating) 4.0 release

CloudStack is actively working towards its first release at the Apache Software Foundation. While much of the work has been getting the tree into shape from a licensing and guideline perspective, many enterprising folks were also working on new features and functionality. Someone recently asked on-list what the features were for the upcoming release – four people answered, each with a different answer, so I figured it was time to aggregate all of that and get it into an easily consumable format (my blog). All of these features have been discussed on the development mailing list, but cloudstack-dev tends to be a fairly high-volume mailing list, so it’s easy to miss things.

I will add one caveat – there may be things that I am missing, and I reserve the right to update the post.

  • Nicira NVP support
  • Ceph/RBD support for KVM
  • EC2/S3 API support integrated into CloudStack (formerly a separate package)
  • Support for Caringo as backend storage for object storage
  • Inter-VLAN Routing (aka VPC)
  • Site-to-Site VPN
  • Local Storage Support for Data Volumes
  • Tagging virtual resources
  • AWS API Changes for Tags

I honestly didn’t expect this level of new features for the first release – even my initial recounting of features was half the size of this list – so it seems it will be a feature-packed release.

That said, we are closing in on a release, and would love to welcome any testing or review. You can find some early test source builds at:

http://people.apache.org/~chipchilders/cloudstack/4.0/

You can get binaries to begin testing here:

http://jenkins.cloudstack.org/job/build-cloudstack-4.0-ubuntu12.04/

http://jenkins.cloudstack.org/job/build-cloudstack-4.0-rhel6.3/


CloudStack moves to Maven

CloudStack has historically used ant for most of its build needs. However, we kept all of the dependencies in binary form in the repository with the code. This was expedient but a bad habit. In the process of cleaning up that source tree we needed a tool that would handle dependencies in a sane manner. The short list of tools we considered included Gradle and Maven.

We also considered an interim solution of continuing to use ant, but handling dependency resolution via an ant target with get tasks.

The reality of the situation is that folks had become comfortable with ant, and it was relatively widely known. We knew of the other alternatives, but few of us actually possessed any experience with those tools. Hugo Trippaers built out Gradle support for us to try, but indicated he was slightly more in favor of Maven. Darren Shepherd stepped up and built out Maven support, indicating that he had used Maven in GoDaddy’s implementation of CloudStack. Darren went one step better, though; realizing how disruptive a build-system change is, his first iteration leaves ant in place, working as it always has, and adds Maven support in parallel. This doesn’t necessarily adopt all of the ideal Maven conventions, but it moves us along, and Darren indicates that the next release will more fully embrace the Maven way.

I know build plumbing isn’t as ‘sexy’ as some new features, but it impresses me to see new people emerging in the CloudStack community during incubation and taking on big chunks of work to improve the project.

So you want to take the new build tool for a spin? It’s easy – ensure you have Maven installed (maven2 and maven3 both work, but the latter is preferred) and run:

 mvn -s m2-settings.xml



A Runbook for CloudStack

Documentation is one of those vital things that any software, but especially open source software, needs in order to be successful. When anyone can come along and download your software and use it, you suddenly have an incredibly diverse audience for not just your software but also your documentation. My experience with CloudStack has made that all the more evident to me. CloudStack, like many projects, requires an understanding of a number of different areas of practice – namely virtualization, networking, Linux, and storage. The number of people who are experts in all of those areas is pretty small. Someone might be an ‘expert’ in two or three of those areas, but which ones is always a mystery; then of course you have the rare folks who are experts at all of them. So writing for a specific audience is at best tedious.

The existing documentation has one other ‘flaw’: it documents almost everything. That really isn’t a flaw – you do want documentation for everything – but if your target is new users of your software, documenting every esoteric feature they could possibly make use of is painful.

In #cloudstack on irc.freenode.net I kept hearing a common refrain: it would take new folks one to two weeks to get CloudStack operating successfully, but once it was up, it ran great. The problem is that, as a sysadmin, I know that being interrupt-driven means things that take a ton of time and effort are going to be dropped along the way, and perhaps never picked up again. While discussing this problem, Chiradeep called my attention to RightScale’s runbooks; they don’t explore every option – instead they provide a prescriptive path to success for one niche way of doing things.

So I began to work on removing all of the choices – I picked the operating system, type of storage, network model, hypervisor, even the network addresses – and then documented the procedure for assembling those pieces into a CloudStack cloud. I’ve published the first revision here, with lots of help from Joe Brockmeier, Chiradeep, and Watnuss:

http://people.apache.org/~ke4qqq/runbook/

A good chunk of this should be completely reusable – if people wanted to alter the network model, hypervisor, or any of the other choices, the base documentation should still be good. So feel free to fork it or improve what is already there; you can find the source here:

https://git-wip-us.apache.org/repos/asf?p=incubator-cloudstack.git;a=tree;f=docs/runbook;h=5833cb61df2f195f8dd4e4825d5c3c8b8fc7d8ba;hb=HEAD