
Thoughts about Fog.io

Fog is billed as “the cloud services library.” This implies, at least to me, that the library provides a consistent interface for interacting with cloud services such as AWS, DigitalOcean, Rackspace Cloud, and so on. By “consistent interface” I mean “the STL of cloud libraries”: I should be able to swap AWS for DigitalOcean for Joyent with very few changes, if any, and my code should remain as change-free as possible. I concede that this is not always possible, that building abstractions comes with the implicit cost of losing specificity, and that some technologies do not abstract well. Also note that, while this post focuses on fog.io as an example, it is not the only villain out there; Ruby itself does not lend itself well to enforceable structure in code.

What I expected

I would have expected fog to provide a common facade on top of the provider-specific libraries. By this I mean that the key pair example below would work by delegating to the underlying AWS SDK or Joyent libraries, whichever applies, instead of re-inventing the wheel and implementing all of the AWS API calls itself. I would expect a main module named ‘fog’ which provides the interface, and sub-modules such as fog-aws that require the proper underlying provider library. That being said, it seems as though some providers do operate in that fashion (fog-softlayer and fog-brightbox being among them).
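To make the expectation concrete, here is a minimal sketch of the kind of facade I had in mind: a top-level module that dispatches to registered provider adapters, each of which would delegate to the vendor’s own SDK. Every name here (CloudFacade, FakeAwsCompute) is hypothetical, not fog’s real API.

```ruby
module CloudFacade
  ADAPTERS = {}

  # Provider gems register an adapter class under a provider name.
  def self.register(provider, adapter_class)
    ADAPTERS[provider] = adapter_class
  end

  # One entry point; the provider name picks the adapter.
  def self.compute(provider:, **credentials)
    klass = ADAPTERS.fetch(provider) do
      raise ArgumentError, "no adapter registered for #{provider}"
    end
    klass.new(**credentials)
  end
end

# A provider gem (a stand-in for something like fog-aws) registers itself
# and would delegate to the vendor's own SDK internally.
class FakeAwsCompute
  def initialize(**_credentials); end

  def key_pairs
    [] # a real adapter would call through to the AWS SDK here
  end
end

CloudFacade.register('AWS', FakeAwsCompute)

compute = CloudFacade.compute(provider: 'AWS')
```

The point of the registry is that calling code never mentions a provider class directly, so swapping providers is a one-string change.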

What I found

What I got instead was a bunch of loosely-related libraries, each with its own implementation that is subtly different from the others (in such a way as to be genuinely confusing about how to implement the right logic). The libraries replace provider-specific terminology with their own terms for states and other messaging from the provider, which is possibly worse than the poor facade, because it muddles the signals that users expect from the provider’s own documentation and interfaces.

I’ve noticed this lack of consistency while working on things like chef-metal-fog, which relies on fog for its underlying API calls. Here we will look at fog’s logic for fetching the public key pairs from both DigitalOcean and Joyent. Note that the code below was taken from a pull request and modified slightly for readability; as such I lay no claim to it being the most idiomatic example. (I would argue, however, that even if there is a better way, this divergence shouldn’t even be possible.) Let’s take a look at some examples of where this happens, in this source file.

when 'DigitalOcean'
  current_key_pair = compute.ssh_keys.select { |key|
    key.name == new_resource.name
  }.first

  if current_key_pair
    @current_fingerprint =
      compute.ssh_keys.get(current_key_pair.id).ssh_pub_key
  else
    current_resource.action :delete
  end

Here is the analogous example when using the Joyent cloud. Notice that it looks surprisingly different.

when 'Joyent'
  current_key_pair = begin
    compute.keys.get(new_resource.name)
  rescue Fog::Compute::Joyent::Errors::NotFound
    nil
  end

  if current_key_pair
    @current_id = current_key_pair.name
    @current_fingerprint = if current_key_pair.respond_to?(:fingerprint)
      current_key_pair.fingerprint
    elsif current_key_pair.respond_to?(:key)
      public_key, _format = Cheffish::KeyFormatter.decode(current_key_pair.key)
      public_key.fingerprint
    end
  else
    current_resource.action :delete
  end
For me to want to use fog, it would need to implement an actual interface and strictly require that all of the adapters adhere to it. For example, this whole block could be replaced with one simple statement like the following:

# Hypothetical unified interface: the same call for every provider
@current_fingerprint = begin
  compute.key_pairs.get(new_resource.name).fingerprint
rescue Fog::Compute::Errors::NotFound
  nil
end
Instead, we have not only two different implementations that achieve the same goal, but interface objects (compute, in this case) that should expose the exact same set of methods and instead offer two completely different ways of accessing the same data across providers. In the case of Joyent we access the public keys via compute.keys, whereas DigitalOcean uses compute.ssh_keys; and to top it all off, the objects returned by Joyent respond to methods like fingerprint and key, while the objects created by DigitalOcean respond to ssh_pub_key to get at the same bits of data.
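One way to paper over that divergence is a thin per-provider adapter that normalizes key pairs into a single shape. In this sketch the fog objects are stubbed with Structs; only the method names observed above (ssh_keys/ssh_pub_key for DigitalOcean, keys/fingerprint for Joyent) come from fog, and everything else is hypothetical.

```ruby
# Normalized result type shared by every provider adapter.
KeyPair = Struct.new(:name, :fingerprint)

class DigitalOceanKeyPairs
  def initialize(compute)
    @compute = compute
  end

  def find(name)
    raw = @compute.ssh_keys.find { |k| k.name == name }
    raw && KeyPair.new(raw.name, raw.ssh_pub_key)
  end
end

class JoyentKeyPairs
  def initialize(compute)
    @compute = compute
  end

  def find(name)
    raw = @compute.keys.find { |k| k.name == name }
    raw && KeyPair.new(raw.name, raw.fingerprint)
  end
end

# Stubbed provider objects, shaped like the fog responses above.
DoKey  = Struct.new(:name, :ssh_pub_key)
JoyKey = Struct.new(:name, :fingerprint)
do_compute     = Struct.new(:ssh_keys).new([DoKey.new('deploy', 'aa:bb:cc')])
joyent_compute = Struct.new(:keys).new([JoyKey.new('deploy', 'aa:bb:cc')])

do_pair     = DigitalOceanKeyPairs.new(do_compute).find('deploy')
joyent_pair = JoyentKeyPairs.new(joyent_compute).find('deploy')
```

With adapters like these, the caller only ever sees KeyPair, and the provider quirks are confined to one small class each.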

Using provider-specific libraries

At this point you would be better off using two completely different libraries, one for each specific cloud platform, because you are not getting much benefit out of fog. In fact, I would argue that you are actively reducing code quality. Instead of small, modular, easy-to-read implementations, you now have a huge mess of conditionals all over the place. Since only one branch will ever be exercised for a given deployment (i.e., when Joyent is your provider, every one of these checks takes the Joyent path), you would be better off moving that logic into its own module. That keeps the code easier to read and the mental model much simpler, because each module focuses on a single provider, leading to shorter methods if nothing else.
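The refactor I’m suggesting might look like the following sketch: resolve the provider to a module once, and let the rest of the code call one interface with no case statements. Module and method names here are hypothetical, and the fog objects are again stubbed with Structs.

```ruby
module DigitalOceanKeys
  def self.fingerprint_for(compute, name)
    key = compute.ssh_keys.find { |k| k.name == name }
    key && key.ssh_pub_key
  end
end

module JoyentKeys
  def self.fingerprint_for(compute, name)
    key = compute.keys.find { |k| k.name == name }
    key && key.fingerprint
  end
end

PROVIDER_KEYS = {
  'DigitalOcean' => DigitalOceanKeys,
  'Joyent'       => JoyentKeys,
}.freeze

# Resolved once, instead of a `case provider` at every call site.
provider_keys = PROVIDER_KEYS.fetch('Joyent')

JoyentKeyStub = Struct.new(:name, :fingerprint)
compute = Struct.new(:keys).new([JoyentKeyStub.new('deploy', 'ab:cd:ef')])
fp = provider_keys.fingerprint_for(compute, 'deploy')
```

Each module stays short and focused on one provider, and adding a provider means adding a module, not threading another branch through every conditional.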

Moving away from fog.io

For this reason, I have begun building a pure AWS driver for chef-metal instead of focusing on the fog driver beyond maintenance. The unneeded abstraction in fog has not only made interfacing with the various providers more complicated than it needs to be; it also means that we’re not always using the provider’s official API clients, such as the AWS SDK. Additionally, moving to the AWS SDK has allowed us to build AWS-specific primitives for metal, such as SQS and SNS, that either wouldn’t exist or would be much harder to map onto when using fog. We also get the added benefit of using code that comes from the folks at AWS, and is therefore likely to be up-to-date and better supported than the fog.io implementation.