Pony Factor Math

As one commenter correctly pointed out in my blog post on risk, I hadn’t provided any information on how Pony Factor is calculated. To that end, below is an explanation written by Daniel Gruno. Many thanks to him for giving me permission to use his work here:

Measuring the diversity of projects using equine mathematics

In order to do a deeper analysis of how ASF projects fare compared to other FOSS projects out there, we coined the term “Pony Factor” (because the ASF is full of ponies, or people who think they are ponies).


In layman’s terms, Pony Factor (PF) shows the diversity of a project in terms of how the work is divided among its committers. The higher the PF, the more resilient the project is to one or more contributors leaving or taking a vacation from the project.

In more mathematical terms, PF is defined as: “the lowest number of committers whose total contribution constitutes the majority of the codebase”. The specific formula we use in this write-up is called the Augmented Pony Factor (APF) and takes into account whether or not a committer is still active. If not, those contributions are ignored when determining the overall score of a project.


Pony Factor can be written as:

$$P \;=\; \min\Big\{\, p \;:\; \sum_{n=1}^{p} C_n \;\geq\; \frac{K}{100}\, V \,\Big\}$$

Where P is the Pony Factor, C_n is the number of commits by committer number n when sorting descending by number of commits, K is the percentage of the codebase we are looking for (50 for a simple majority), and V is the total commit volume.


Let’s take an example: Apache Foo has 30 active committers. Bob, who is still an active committer, has contributed 8% of the code. Bill, who originally wrote Foo, hasn’t been active for 4 years, but wrote 32% of the overall codebase. Jane, who joined 2 years ago, has written 12% of the codebase, and Rich, Ellen and Joe have each written 10%. The rest of the codebase was written by 24 other people. Thus, when Bill’s contribution is ignored, Bob, Jane, Rich, Ellen and Joe together account for 50% of the codebase, and the APF is 5.
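To make the arithmetic concrete, here’s a minimal Python sketch of the APF calculation as defined above. The numbers mirror the hypothetical Apache Foo example (using percentages as a stand-in for commit counts); this illustrates the definition, not the actual tooling Daniel used:

def augmented_pony_factor(commits, active, threshold=0.5):
    """Lowest number of still-active committers whose combined
    commits reach `threshold` (K/100) of the total volume V."""
    goal = threshold * sum(commits.values())  # (K/100) * V
    running = 0
    # C_n: active committers' commit counts, sorted descending
    for n, count in enumerate(
            sorted((commits[c] for c in active), reverse=True), start=1):
        running += count
        if running >= goal:
            return n
    return None  # active committers never reach the threshold

# Apache Foo: percentage of the codebase per committer
commits = {"Bob": 8, "Bill": 32, "Jane": 12, "Rich": 10, "Ellen": 10, "Joe": 10}
commits.update({f"dev{i}": 0.75 for i in range(24)})  # 24 others share 18%
active = set(commits) - {"Bill"}  # Bill hasn't committed in 4 years

print(augmented_pony_factor(commits, active))  # -> 5

The top five active committers (Jane, Rich, Ellen, Joe and Bob) are the first to cumulatively reach 50% of the total volume, so the function returns 5, matching the worked example.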



Risk of the Commons

I’ve been reading and thinking recently about the potential for a tragedy of the commons to become a reality in the realm of open source. The tragedy of the commons is an economic theory which says that individuals, each acting in their own self-interest, deplete a common resource to everyone’s detriment. One needs to look no further than the environment for examples of this behavior.

Free and Open Source Software

Free and Open Source software has revolutionized how the world consumes software. Linux, BSD, httpd, nginx, MySQL, PostgreSQL, and thousands of other software products are consumed voraciously. But almost universally, people are only consuming, and generally that’s okay. Sharing is one of our key tenets and strengths – that we are able to freely share code to help our neighbor.

Of course, we can’t deny that this has a downside. We’ve all seen the outrage and shock following the Heartbleed vulnerability, when it emerged that only 4 people were watching over and contributing to one of the most widely used cryptography libraries in the world.

This past week, the news broke that the GPG suite of tools was basically unfunded. Unfortunately, it took that level of shaming for folks to realize the project was important and to supply some money to fund a developer or two to work on it. Of course, I am as guilty as anyone – I’ve used GPG for years to protect private communications, to sign software releases, and for a few other purposes. And before that, I used OpenSSL for years to protect web traffic. Yet until this past week, I hadn’t contributed a cent directly to any of those efforts.

Technical Debt

The issue though is really larger than just security or a few ‘critical infrastructure’ projects. The reality is that we, and especially businesses, are incurring an odd kind of technical debt for every piece of free and open source software that we are using. If we aren’t actively contributing to a project, we are hoping that others will. We are putting our trust in the fact that someone will find it valuable enough to contribute, even when we don’t.

Recently, one of the members of the Apache Software Foundation’s infrastructure staff, Daniel Gruno, did some research into open source project health. He coined a humorous term, the ‘Pony Factor’, to represent the lowest number of contributors to a codebase who together account for 50% of that codebase. Then, realizing that people come and go from open source projects, he developed an ‘Augmented Pony Factor’ calculation that takes into account only active developers.

He started by looking at projects at the Apache Software Foundation. The graphic below shows, for a handful of Apache projects, the lowest number of individual contributors who together account for half of the project’s codebase. Obviously that graphic only lists a few of them. If you want to see all of the current projects from the Apache Software Foundation, you can see that graph here.

[Graphic: Pony Factors for ASF projects]

But in a vacuum, it’s hard to know whether those numbers are good or not, so Daniel went beyond that and looked at a handful of popular free and open source projects:

[Graphic: Pony Factors for popular external FOSS projects]

I was shocked by some of those statistics. Do I feel comfortable that only three people are actively contributing 50% of the current code changes to the blog software this post appears on? Or that one person writes most of my preferred version control platform? Am I willing to trust my business to that? And of course we know that state security agencies have NEVER asked free and open source developers to compromise security.

How do we avert a tragedy?

The first issue is that we need to be aware of the risk. Many of us see the incredible platforms out there and simply trust that the people who wrote them knew best and are going to continue to provide us with great software. As one of my favorite authors quipped in his novel, TANSTAAFL.

“Gospodin,” he said presently, “you used an odd word earlier–odd to me, I mean…”

“Oh, ‘tanstaafl.’ Means ‘There ain’t no such thing as a free lunch.’ And isn’t,” I added, pointing to a FREE LUNCH sign across room, “or these drinks would cost half as much. Was reminding her that anything free costs twice as much in long run or turns out worthless.”

“An interesting philosophy.”

“Not philosophy, fact. One way or other, what you get, you pay for.”

I am not advocating paying for every piece of software or contributing to every open source project in existence, but I’ll leave you with this question: what are you doing to avert a tragedy of the free and open source commons? And if you aren’t doing anything, then who will?


CloudStack system VM architecture

I had a recent discussion with some folks wondering why there is now an option for 32- or 64-bit system VMs in CloudStack 4.3. I provided an answer and linked back to some mailing list discussions, but I figured this might be of general interest, so I’d document it with a blog post.

For background, system VMs provide services like dealing with snapshots and image templates, providing network services like load balancing, or proxying console access to virtual machines. They’ve historically been 32-bit. The reason for this is that the 32-bit arch has been very efficient with memory usage, and since these are horizontally scalable it’s easy to just spin up another. 

But you can have either – which do you pick?

Depending on the workload, you might have a different answer. Some hypervisors work better with one arch than the other, and that might be a factor; but ignoring hypervisors, let’s examine the reasons you’d want to use either.

32-bit: 32-bit operating systems are pretty efficient in their use of memory compared to 64-bit (e.g. the same information typically occupies less space in memory). However, there are limits on memory. (Yes, you could use PAE with a 32-bit kernel to get more addressable memory, but there is considerable CPU overhead in doing so – which makes it inefficient given that all of this is virtualized.) 32-bit kernels also have a limit on how much memory the kernel itself can use, and this is really where the use case for 64-bit system VMs evolved. One of the system VM functions is load balancing: CloudStack orchestrates HAProxy as the default virtual LB, which in turn uses the conntrack kernel module, and on 32-bit conntrack had a practical limit of ~2.5M connections – which left precious little room for the kernel to do anything else. A heavily trafficked web property behind CloudStack’s 32-bit virtual load balancer might run into that limitation.
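As an aside, if you want to see how close a virtual router or system VM is to that conntrack ceiling, a small Python sketch like the one below works on most Linux guests. The /proc paths here are an assumption based on modern kernels – older kernels exposed the same counters under /proc/sys/net/ipv4/netfilter/ instead:

# Compare tracked connections against the conntrack table limit
def read_int(path):
    with open(path) as f:
        return int(f.read())

maximum = read_int("/proc/sys/net/netfilter/nf_conntrack_max")
current = read_int("/proc/sys/net/netfilter/nf_conntrack_count")
print(f"conntrack: {current} of {maximum} entries in use "
      f"({100.0 * current / maximum:.1f}%)")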

64-bit: Not nearly as efficient with memory usage; however, it can address much more of it. You’ll actually tend to need more memory for the same level of functionality; but if you need to push the envelope further than a 32-bit machine allows, at least you have an option to do so.

In short, you should probably default to 32-bit system VMs unless you envision needing the benefits a 64-bit system VM provides.

If you want to see the original discussion around this topic from the dev@cloudstack.apache.org mailing list, look here: 
http://markmail.org/message/i5kolazi5so52eon


Getting ready for the CloudStack Collaboration Conference Europe

I’ve been in Amsterdam for two days so far in preparation for the CloudStack Collaboration Conference. I had planned to come in, show up at the Schuberg Philis offices, and help get things ready for the conference. There hasn’t been a lot for me to do, though: I looked at some shiny demo racks, I helped load a monitor, and that’s about it. The folks at SBP are really very squared away in terms of conference preparation and are doing very well.

This has given me time to focus on recovering from jet lag and to spend time talking with folks. I’ve also already started meeting folks I previously knew only via email. Some of the pre-conference discussions are intriguing. But this is all pre-conference – tomorrow things actually start, and they start with a hackathon. The proposed hacking sessions leave me with multiple things that I want to work on. Top of mind are:

  • Docs – fixing 4.2.1 release notes and working on 4.3 and beyond.
  • Gluster – getting CloudStack to consume GlusterFS natively.
  • KVM Agent refactoring

We also have the space until late in the night, so this won’t be a hack-for-a-few-hours-and-then-disappear affair; we can keep working well into the evening.

One of the things that I didn’t like about our last in-person hackfest was the lack of a feedback cycle. So I want to try to encourage the folks holding hackfest sessions to report what they worked on and what actually got accomplished, at least to the mailing list, but hopefully also in a blog post.

Should be a fun day.


My thoughts on Apache CloudStack’s graduation

Today was a great milestone for Apache CloudStack. If you haven’t seen the news (but have somehow come across this blog post???), the Apache Software Foundation announced that CloudStack had graduated from the Incubator as a top-level project. While in many ways it marks the achievement of a number of personal and project goals, it’s also just another milepost along the journey.

I’ve been working on CloudStack since most folks knew it as ‘Cloud.com’, and it’s amazing to see the difference over the space of a couple of years. I’ve been involved with several open source projects over the years, and I knew of the ASF by reputation, but had no personal experience with it. I spent weeks reading the documentation on the website when we first began discussing the potential for a move to the ASF, and I rapidly became both impressed and afraid. Impressed, because I saw codified in front of me the most transparent and open expectations of anything I’d been involved in; I suddenly appreciated why the ASF had the reputation it did. Afraid, because the magnitude of change was incredible. The transformation hasn’t been perfectly smooth, and at times I even questioned whether such dramatic change would prove overly disruptive. Many folks have written about how dramatic it is to open source a project – but taking a project that was open source yet still heavily commercially governed and moving it to the Apache Software Foundation spans both extremes of the open source spectrum.

Many people will write about the tremendous growth in community numbers, but the more interesting story to me is the tremendous growth of community responsibility: over 50% of the Project Management Committee don’t work for Citrix. Of course, committers and PMC members are expected to behave as individuals and in the best interest of the project, but that amount of diversity in a short time is impressive. I also find it fascinating just how many of the folks participating are CloudStack users; they have truly taken ownership of and responsibility for their IaaS platform.

What does graduation mean from my perspective? A number of different things, but most poignantly, it means that we have met the expectations of our mentors, the Incubator, and the Apache Software Foundation. I am excited about the future. I have no delusions: for many, or even most, of our users, graduation has little or no immediate impact. It does have an impact for the project as we shift our focus forward, and I think that will tremendously benefit our users. I think you’ll continue to see impressive things from Apache CloudStack; we’re really only getting started.

A couple of words of thanks to folks

I can’t express how wonderful our mentors were. They understood the process, they saw the challenges, and they stepped in where appropriate while letting us find the solution ourselves when we needed to. I’ve walked away thoroughly impressed by both the individuals and the incubation process in general; I am sure it can be improved, but I struggle to think of anyone doing it better.

The folks at Apache Infrastructure are doing an incredible and impossible job – supporting well over 100 top-level projects, including such behemoths as Hadoop, Maven, and now CloudStack, while dealing with plenty of inbound incubator projects, some of which, like CloudStack, have years of history and thus plenty of baggage to bring with them. Thanks for the immense amount of help you folks have provided.


Upgrading CloudStack from 4.0.0-incubating to 4.0.1-incubating

I spent my weekend upgrading my cloud. Yes, my CloudStack IaaS cloud isn’t going to threaten AWS EC2 anytime soon, but it still serves a vital purpose for me, giving me a place to provision services, provide a test bed, and do many other things.

What does my personal cloud look like? It’s a small cloud of 6 physical nodes (Dell R210s) based in a datacenter somewhere in San Jose. One of the 6 nodes is the CloudStack management server, running CentOS 6.3 with KVM on the same node; three more are KVM nodes, and I also have one XenServer node and one XCP node. That said, the principles behind upgrading remain the same regardless of the machine.

So why did I upgrade? Well, Apache CloudStack 4.0.1-incubating was announced today (and released late last week). What’s the difference between 4.0.0-incubating and 4.0.1-incubating? Well, there are no new features, but there are a number of bugfixes included.

CloudStack versions now adhere to Semantic Versioning, which means this is merely a maintenance or bugfix update. Bugfix releases are great for production deployments – the change is minimal, and it’s all focused on fixing problems. Within open source projects, though, bugfix releases tend to be unsexy. Many communities skip them entirely because developers want to work on new things and don’t keep up with the massive investment that is backporting fixes. It is, I think, a reflection of CloudStack’s project maturity and operator-driven community that a bugfix release is already out and another is on the way (4.0.2 planning emails started flowing even before 4.0.1 was announced). Yes, many developers are looking at the next feature release (4.1.0) and even the one past that (4.2.0), but that doesn’t diminish the importance of the 4.0.x install base, or the expectations those folks have – many folks, especially large cloud deployments, aren’t going to want to perform a feature upgrade every few months, but can tolerate the smaller bugfix releases.

But enough about that – if you have 4.0.0 deployed, you’re interested in how the upgrade went, aren’t you? So I have a confession to make: I didn’t completely follow the upgrade directions. You should. The management server upgrade was painless – I ran:

# yum -y update cloud-*

followed by

# service cloud-management restart

And all was well.

Next were my KVM agents. I shelled into these as well (yes, I know, I am awful for not leveraging configuration management, especially when I am such a CM advocate), but my environment is where I test installation instructions, and most (but not all) of the upgrade instructions. I didn’t bother to stop the agents, or even to put them into maintenance mode. I ran:

# yum -y update

to get all of the operating system upgrades as well as the CloudStack upgrade, and then:

# service cloud-agent restart

and my agents reconnected to the management server without a hitch. I tested a few deploys and destroys and everything worked well. Zero VM downtime for any of my running VMs.

Again, you should follow the directions, make DB backups, etc, but you shouldn’t expect any issues on the 4.0.0->4.0.1 upgrade path.

(Note: I first published this at BuildACloud.org)

Book Review: The Phoenix Project

I’ve been hearing about Gene Kim and company working on “The Phoenix Project” for almost a year now, most frequently at places like DevOps Days.

I knew it was modeled after Eli Goldratt’s “The Goal”, essentially a business novel in which Goldratt uses the story of a struggling manufacturing manager to lay out the tenets of the Theory of Constraints. I first read The Goal more than a dozen years ago, and while it wasn’t necessarily the finest piece of literature, it was a compelling book, and the book and its corresponding theory have become a thing of legend.

Fast forward several years, and one of my friends, DevOps guru John Willis, starts talking about “The Goal” in the context of DevOps, and then Ben Rockwood points back further to folks like W. Edwards Deming. So trying to imitate the effect of books like “The Goal” is a very lofty aspiration. I honestly didn’t think it could be done, particularly if driven around IT.

But then on Tuesday I heard via email and twitter that the book had finally been released. I was travelling that day and ordered the e-book in the afternoon, hoping I could perhaps finish it by the end of the week. I started reading it during dinner, and by the time I retreated to my room I was already hooked. The book was captivating. I saw myself at various stages of my life in several of the characters, and I could empathize with the problems. I couldn’t stop reading, and actually finished the book that evening.

The basic synopsis of the book: Bill gets a sudden promotion to VP of IT after two of his superiors are dismissed, and is immediately thrown into the fire with business-threatening outages and a vastly overbudget, behind-schedule project that is supposed to ‘save the company’. He struggles to deal with the deluge of ongoing problems and get the project back on track while political battles rage around him. With the help of Erik, a prospective board member and ‘IT guru’, he dramatically changes the culture and method of operation for both operations and development, and saves the company.

So, my thoughts – I think the book will likely have the same effect as “The Goal”, but with a focus on the IT business – or perhaps it will push folks to realize that IT really isn’t so different from manufacturing. I think it should also inspire folks to adopt the DevOps culture.
That said, I don’t expect the kind of timelines Bill achieved to be the norm, especially without someone like the mentor Erik, who has the respect and trust of the executives to shield Bill and advocate for his changes. Even so, it is a tremendous book, and I know Gene Kim and others hope it will usher in the DevOps revolution to the masses – I think there is a chance that it will.

In short, if you are even remotely involved in IT, or have any curiosity about DevOps, you should read this book as soon as possible. I noted that yesterday Gene sent out an email saying the book was #1 in the Kindle business management section on Amazon, outpacing books like “The Lean Startup” and “Good to Great”.

Viva la DevOps Revolution!


CloudStack 4.0.0 – released

Moving to the ASF has been an interesting process and quite a whirlwind. In addition to all of the shuffling of infrastructure to the ASF and the migration of codebases and licensing, there’s been an amazing influx of participation in the project.

Of course, in addition to all of this shuffling of resources, we still had the business of running the project, producing software, and hopefully getting a release ready. In getting a release ready there are plenty of new policies and guidelines to comply with; and CloudStack is a pretty established and large codebase, so there were plenty of things to audit and consider, and a few things needing actual change. We finally (as you hopefully have heard) had a release that met those standards and was approved by both the community and the IPMC. This is noteworthy in and of itself, but of course while many were working on getting the codebase into shape, others continued to work on their desired features – and it astounds me how many things happened concurrently:

Of course this isn’t an exhaustive list – you can see that in the Release Notes.

  • Inter-VLAN routing (aka VPC)
  • Site-to-site VPN
  • Local storage for data volumes
  • Nicira NVP SDN support
  • Object storage via Caringo
  • CLVM support (returns)
  • Ceph (RBD) support for KVM

Of course – you know you want all of this goodness – and fortunately the 4.0.0-incubating release is available; you can download it and begin using it today.


Secondary storage scalability configuration and tuning in CloudStack

Someone asked about this recently on the cloudstack-users mailing list, and I figure this is both an interesting design point of CloudStack and a specific piece of knowledge that others will find useful.

So first, let’s talk about what secondary storage is – especially if you are not familiar with CloudStack’s storage hierarchy. Secondary storage is where templates (aka non-running disk images) and snapshots are stored. The Secondary Storage VM (SSVM) handles a number of things, most importantly shuttling images and snapshots between the various primary storage resources (where running disk images are stored) and the secondary storage resources. The SSVM also handles receiving uploaded disk images and aging/scavenging snapshots. The SSVM is really just some CloudStack agent software running atop a Linux VM, but it bears the name SSVM because of the functionality it provides.

So why run this storage management service as a stateless virtual machine instead of incorporating it into some centralized all-in-one management server? First, your management server might not be network-proximate to your secondary storage resource, particularly if you have multiple geographically disparate datacenters with resources being managed from just a few locations. Second, all-in-one isn’t very scalable. With stateless ‘worker VMs’, CloudStack can dynamically scale the number of VMs up to deal with load; at minimum, a single SSVM will exist in each zone. CloudStack uses two configuration options to determine when to scale up and add more:

  • secstorage.capacity.standby
  • secstorage.session.max

secstorage.capacity.standby is the minimum number of command execution sessions that the system must be able to serve immediately. secstorage.session.max is the maximum number of command execution sessions that a single SSVM can handle. By manipulating these settings (raising the former and lowering the latter) you can trigger automatic scaling sooner and at lower load, or you can let the defaults do their job.
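To build some intuition for how those two settings interact, here’s a rough Python sketch of the scale-up decision. This is an approximation for illustration (the default values are assumptions), not CloudStack’s actual implementation:

def needs_another_ssvm(active_sessions, ssvm_count,
                       session_max=50, standby=10):
    """session_max mirrors secstorage.session.max;
    standby mirrors secstorage.capacity.standby."""
    capacity = ssvm_count * session_max  # sessions the current fleet can serve
    spare = capacity - active_sessions   # sessions servable immediately
    return spare < standby               # scale up when standby isn't met

# One SSVM serving 45 sessions leaves only 5 spare, below a standby
# requirement of 10, so another SSVM would be launched.
print(needs_another_ssvm(active_sessions=45, ssvm_count=1))  # True

Raising standby or lowering session_max both shrink the spare margin, which is why tweaking those settings triggers scaling earlier.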

Of course that’s the new-fangled cloud way of doing things – just add more workers to do the work, but that’s not the only way of tuning or making your SSVM more scalable.

First, you can just make the SSVM bigger. The default service offering is a single-vCPU machine with 500MHz of CPU and 256MB of RAM. That is pretty small, and works well for environments where change isn’t high, snapshots are infrequent, and the environment itself is small. You can of course edit that service offering, or define a new, beefier one. If you define a new offering, you need to point CloudStack at it with the secstorage.service.offering parameter. So not only can you scale out, you can also scale up.

Of course, nothing really trumps architecting things right in the first place. It’s pretty trivial to throw up a cloud computing environment with a single NIC on every host and let management, guest, public, and storage traffic traverse a single physical network. There isn’t inherently anything wrong with that, but if you really need to maximize efficiency, you can always have a dedicated storage network – and even better, enable jumbo frames on your networking hardware, your hypervisors, your storage hardware, and yes, your SSVM as well. You can configure the SSVM’s MTU with the secstorage.vm.mtu.size parameter.
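Global settings like secstorage.vm.mtu.size (or secstorage.service.offering above) can be changed in the UI, but here’s a sketch of doing it against the API with Python. This assumes you’ve enabled the unauthenticated integration port (8096) on the management server – it is not open by default – and ‘management-server’ is a hypothetical hostname:

import requests

# CloudStack's integration API port; signed requests against the
# regular authenticated API endpoint work too
ENDPOINT = "http://management-server:8096/client/api"

resp = requests.get(ENDPOINT, params={
    "command": "updateConfiguration",
    "name": "secstorage.vm.mtu.size",
    "value": "9000",  # jumbo frames
    "response": "json",
})
print(resp.json())
# Most global settings only take effect after a management server restart.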

Even if you don’t have intense levels of snapshot work and template work, you may still want to tune your SSVM to make it more time-efficient.


Even more transitions – github

CloudStack has maintained some sort of presence on github for some time. In its earliest days this was merely a mirror of the primary git server at cloud.com. Gradually, more and more code made its way to the github account, including code for the knife-cloudstack plugin and a number of other interesting tools. Back in April, CloudStack joined the ASF’s Incubator, and thus lots of resources needed to transition to the ASF’s infrastructure. The existence of a CloudStack github account, particularly one with a repo for the original CloudStack code, also created some confusion as to where one should get the canonical source for CloudStack. Today that github account name changed from CloudStack to CloudStack-extras (in the spirit of apache-extras), and there are disclaimers and pointers to the new home of the project. The original copy of the CloudStack codebase remains. (It’s important to realize that CloudStack was originally GPL, and thus keeping the source available remains a responsibility – further complicated since only the two latest of the 195 branches were moved to the ASF.)

Additionally the primary server that used to house CloudStack code (git.cloud.com) has been retired.

The canonical source code repo for CloudStack is (and has been for some time):
https://git-wip-us.apache.org/repos/asf/incubator-cloudstack.git

Of course we aren’t done migrating all of the resources, more of that coming soon.