John Ewart

Hello, Chef!

As of yesterday, I have said goodbye to my friends and colleagues at Amazon and begun my work for Chef! I learned a lot during my time at Amazon and met some amazing, bright folks, but I'm looking forward to my new role at Chef and to spending more time writing open source software.

I've been active in the Chef world for a number of years now as both a user and an author; now I'll get to work on making it more awesome as part of my day job. Initially I'll be working on chef-metal, bringing our Docker support up to par with our other drivers, and on codifying some tools I've been using for years into a profile manager for people who manage multiple Chef environments.

Mounting LVM partitions from Xen Server

After pulling some disks from a Xen Server that had LVM volume groups on them, I needed to mount them in order to pull the data off. The trick here is that Xen Server exposes the LVM logical volumes as raw disks to the guest, so you need to probe the disk label and make the partitions available to the system the disks are now in. The steps, and example commands, are sketched below.

Steps

  1. Scan for physical volumes with pvscan
  2. Scan for volume groups with vgscan
  3. Scan for logical volumes with lvscan
  4. List partitions in the correct logical volume with kpartx
  5. Make the partitions available to the system with kpartx
  6. Mount the partition
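
As a rough sketch, the first three scanning steps look something like this; the volume and device names are discovered by the scans themselves, and if lvscan reports the volumes as inactive, vgchange -ay will activate them:

root@debian:~# pvscan    # find LVM physical volumes on the attached disks
root@debian:~# vgscan    # find the volume groups those physical volumes belong to
root@debian:~# lvscan    # list the logical volumes in those volume groups
Scanning for LVM volumes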

Using kpartx

List the partitions with kpartx:

root@debian:~# kpartx -l /dev/VG_XenStorage-e5c11bff-0232-1f73-db15-14e42830fb1d/LV-6b056523-bb43-48f6-ac0e-f4bc69984355
VG_XenStorage--e5c11bff--0232--1f73--db15--14e42830fb1d-LV--6b056523--bb43--48f6--ac0e--f4bc69984355p1 : 0 6442448863 /dev/VG_XenStorage-e5c11bff-0232-1f73-db15-14e42830fb1d/LV-6b056523-bb43-48f6-ac0e-f4bc69984355 2048
List the partitions with kpartx

Add the partitions to the device mapper with kpartx so that they can be mounted somewhere:

root@debian:~# kpartx -a /dev/VG_XenStorage-e5c11bff-0232-1f73-db15-14e42830fb1d/LV-6b056523-bb43-48f6-ac0e-f4bc69984355
Attach partitions with kpartx

Now mount the partition. Note that each partition shows up under /dev/mapper with a name derived from the parent LVM logical volume, with "pN" appended to the end, where N is the partition number.

In my case, ‘p1’ was appended:

mount /dev/mapper/VG_XenStorage--e5c11bff--0232--1f73--db15--14e42830fb1d-LV--6b056523--bb43--48f6--ac0e--f4bc69984355p1 /data
Mount newly attached partition

Capping Riak Memory Consumption

In a production environment, we noticed that when storing or reading data in Riak, there were periods where it acted as though it was being throttled by I/O wait time. That didn't make much sense, since each machine showed beam.smp ballooning to 100% memory usage while I/O wait was typically quite low (<1%). It turns out that setting the eleveldb cache to a fixed size per partition lets you limit the amount of memory being used. I'm not 100% sure that the documented default of 8MB per partition is accurate; it seems that if you don't specify a value, the default is "as big as it can be". In our case Riak was eating up all physical memory, which pushed other processes into swap, which caused contention and some pretty slow interactions.
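
For reference, and assuming the eleveldb backend's cache_size setting in Riak's app.config, the relevant stanza looks roughly like this (the 64MB figure is illustrative, not a recommendation):

{eleveldb, [
            {data_root, "/var/lib/riak/leveldb"},
            %% Illustrative value: cap the cache at roughly 64MB per partition
            {cache_size, 67108864}
           ]},
Limiting the eleveldb cache size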

Java's date formatter is not thread safe

As it turns out, Java's date formatter is not thread safe: it uses internal state to hold the various pieces of the date while formatting. My solution was to replace the single shared instance variable with a date formatter factory that generates a fresh formatter as needed. Note that this is only one of many solutions, and not necessarily the most efficient one.
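
A minimal sketch of that idea, assuming SimpleDateFormat and an illustrative pattern (this is not the exact code from that project):

import java.text.DateFormat;
import java.text.SimpleDateFormat;

// Hypothetical factory: every caller gets its own formatter instance,
// so no SimpleDateFormat internal state is shared between threads.
public class DateFormatterFactory {
    private static final String PATTERN = "yyyy-MM-dd'T'HH:mm:ss'Z'"; // illustrative

    public static DateFormat newFormatter() {
        return new SimpleDateFormat(PATTERN);
    }
}
Date formatter factory (sketch)

A ThreadLocal<SimpleDateFormat>, or the immutable java.time.format.DateTimeFormatter on Java 8 and later, solves the same problem with less object churn.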

Rackspace Cloud load balancers and WebSockets

Recently I needed to configure a Rackspace Cloud load balancer to support WebSockets. Initially I tried the TCP protocol (which seemed like the logical choice), but that resulted in dropped connections. Even though I didn't expect it to work, I also tried HTTP, since WebSockets is effectively HTTP with a connection upgrade, but the conversation would stop after the Connection: Upgrade header was sent. Hopefully this will be of some use to someone else as well.

After some digging, it turns out that the trick is to use the TCP_CLIENT_FIRST protocol, which expects the client to be the first one to send data to the server (for example, an HTTP GET request). The documentation on these options lives in the Rackspace API developer docs.

The downside is that you can't use any of the Layer 7 monitoring / verification (e.g. checking HTTP status codes), but it works just fine for mapping requests. I combined this with Nginx's WebSocket proxy support and everything works smoothly.
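
For reference, a minimal Nginx WebSocket proxy block looks something like the following; the location path and upstream address are placeholders for whatever your application uses:

# Sketch of an Nginx WebSocket proxy; the path and upstream are hypothetical
location /socket/ {
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
Nginx WebSocket proxy (sketch)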

Sphinx, MacTeX and Macports

If you want to use Sphinx to build PDF documentation using MacTeX built from MacPorts, you will need to install the following ports (this may not be a minimal list, but it is a functional one):

  • texlive-latex
  • texlive-latex-recommended
  • texlive-fonts-recommended
  • texlive-fonts-extra
  • texlive-latex-extra

I'm not sure whether the -extra packages are needed, but they don't take up much space, so I went ahead and installed them.
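
Installing them all in one go looks roughly like this (assuming MacPorts' port command is on your PATH); Sphinx's PDF output is then built with its latexpdf Makefile target:

# Install the TeX Live ports listed above
sudo port install texlive-latex texlive-latex-recommended \
    texlive-fonts-recommended texlive-fonts-extra texlive-latex-extra
Installing the TeX Live ports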

Converting between Proj4 and WKT with Python

I learned that the OSGEO Python module (osgeo) has a nice way to convert between WKT representations and Proj4. I have some LiDAR data from the [CA DWR][dwr] covering the Central Valley of California, and it's been recorded using a non-standard projection (essentially UTM10 in feet instead of meters). I had created a proj4 string that I've been using with Python, but needed it in WKT format to use with GeoTools in a Java program. I found some examples [here][proj4conversionsource] and have reproduced them below for convenience.

#!/usr/bin/env python

import sys
import osgeo.osr

if len(sys.argv) != 2:
    print 'Usage: wkt2proj.py [WKT Projection Text]'
else:
    # Build a spatial reference from the WKT argument and print the Proj4 equivalent
    srs = osgeo.osr.SpatialReference()
    srs.ImportFromWkt(sys.argv[1])
    print srs.ExportToProj4()
Convert WKT to Proj4
#!/usr/bin/env python

import sys
import osgeo.osr

if len(sys.argv) != 2:
    print 'Usage: proj2wkt.py [Proj4 Projection Text]'
else:
    # Build a spatial reference from the Proj4 argument and print the WKT equivalent
    srs = osgeo.osr.SpatialReference()
    srs.ImportFromProj4(sys.argv[1])
    print srs.ExportToWkt()
Convert Proj4 to WKT

[dwr]: http://wwwdwr.water.ca.gov
[proj4conversionsource]: http://spatialnotes.blogspot.com/2010/11/converting-wkt-projection-info-to-proj4.html

Creating a custom CRS with GeoTools

I needed to transform some spatial data into a custom CRS (the previously mentioned reference system from the CA DWR) in some Java code I'm writing to compute elevation profiles of river channels. I only had the CRS in proj4 format, which, as far as I'm aware, is not supported directly by GeoTools. With a little help from some Python code, I was able to convert my proj4 definition, which looks like this:

+proj=utm +zone=10 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=ft +no_defs
Custom Proj4 definition

into its corresponding WKT format:

PROJCS[ "UTM Zone 10, Northern Hemisphere",
  GEOGCS["GRS 1980(IUGG, 1980)",
    DATUM["unknown",
       SPHEROID["GRS80",6378137,298.257222101],
       TOWGS84[0,0,0,0,0,0,0]
    ],
    PRIMEM["Greenwich",0],
    UNIT["degree",0.0174532925199433]
  ],
  PROJECTION["Transverse_Mercator"],
  PARAMETER["latitude_of_origin",0],
  PARAMETER["central_meridian",-123],
  PARAMETER["scale_factor",0.9996],
  PARAMETER["false_easting",1640419.947506562],
  PARAMETER["false_northing",0],
  UNIT["Foot (International)",0.3048]
]
Custom WKT

Given this WKT, you can create a custom CRS using the GeoTools CRSFactory class. After some hunting around, I discovered that you can use it like so:

// Requires: org.geotools.referencing.ReferencingFactoryFinder,
//           org.opengis.referencing.crs.CRSFactory,
//           org.opengis.referencing.crs.CoordinateReferenceSystem,
//           org.opengis.referencing.FactoryException
String customWKT = "PROJCS[ \"UTM Zone 10, Northern Hemisphere\",\n" +
                    "  GEOGCS[\"GRS 1980(IUGG, 1980)\",\n" +
                    "    DATUM[\"unknown\"," +
                    "       SPHEROID[\"GRS80\",6378137,298.257222101]," +
                    "       TOWGS84[0,0,0,0,0,0,0]" +
                    "    ],\n" +
                    "    PRIMEM[\"Greenwich\",0],\n" +
                    "    UNIT[\"degree\",0.0174532925199433]\n" +
                    "  ],\n" +
                    "  PROJECTION[\"Transverse_Mercator\"],\n" +
                    "  PARAMETER[\"latitude_of_origin\",0],\n" +
                    "  PARAMETER[\"central_meridian\",-123],\n" +
                    "  PARAMETER[\"scale_factor\",0.9996],\n" +
                    "  PARAMETER[\"false_easting\",1640419.947506562],\n" +
                    "  PARAMETER[\"false_northing\",0],\n" +
                    "  UNIT[\"Foot (International)\",0.3048]\n" +
                    "]";
try {
  CRSFactory factory = ReferencingFactoryFinder.getCRSFactory(null);
  CoordinateReferenceSystem customCRS = factory.createFromWKT(customWKT);
} catch (FactoryException e) {
  e.printStackTrace(); 
}
GeoTools CRSFactory Example

This creates a custom coordinate reference system without having to replace the EPSG database with a properties file containing all the default EPSG codes plus your own (which seemed like too much work for this use case). You can then use it anywhere you'd use any other CRS object.

USGS Elevation Data with Java

Grabbing elevation data from the USGS web service is pretty straightforward – make an HTTP request to http://gisdata.usgs.gov/xmlwebservices2/elevation_service.asmx/getElevation with the required parameters (X and Y coordinates, elevation units, and the data source), and you get back some XML with the results.

In my case my area of interest is in the US, so the source layer is NED.CONUS_NED, the National Elevation Dataset contiguous U.S. 1 arc-second elevation data. X and Y are provided as longitude/latitude, and I want the data back in feet. Another thing to note is the "Elevation_Only" parameter, which is required when querying for elevation data.

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

// x and y are longitude and latitude; the result is the elevation in feet
public static double getElevation(double x, double y) throws Exception {
    String dataUrl = "http://gisdata.usgs.gov/xmlwebservices2/elevation_service.asmx" +
                     "/getElevation?X_Value=" + x +
                     "&Y_Value=" + y +
                     "&Elevation_Only=TRUE" +
                     "&Elevation_Units=FEET" +
                     "&Source_Layer=NED.CONUS_NED";

    DocumentBuilderFactory docBuilderFactory = DocumentBuilderFactory.newInstance();
    DocumentBuilder docBuilder = docBuilderFactory.newDocumentBuilder();
    Document doc = docBuilder.parse(dataUrl);

    // normalize text representation
    doc.getDocumentElement().normalize();
    // The result is contained in a single <double> node
    NodeList listOfDoubles = doc.getElementsByTagName("double");

    if (listOfDoubles.getLength() > 0) {
        Node elevationNode = listOfDoubles.item(0);
        Element elevationElement = (Element) elevationNode;
        return Double.parseDouble(elevationElement.getFirstChild().getNodeValue());
    }

    throw new IllegalStateException("No <double> element found in the USGS response");
}
Grabbing USGS Elevation Data
<?xml version="1.0" encoding="utf-8"?>
<double>212.110809766714</double>
XML response

Migrating your Chef Server

Today I had to move a Chef server from an Ubuntu 12.04 machine to a Debian 6 box. The simplest path, which worked for me, was to:

  • Set up Chef on the new machine
  • Shut down the services on the old host
  • Compress the contents of /etc/chef and /var/lib/chef on the old server and move the tarball to the new one
  • Shut down the services on the target system
  • Uncompress the tarball on the new server
  • Export the contents of the chef CouchDB database on the old system
    • Use the CouchDB Python package (installable via pip / easy_install as 'couchdb')
    • Dump the data with couchdb-dump http://127.0.0.1:5984/chef > /tmp/chef.json
  • Copy the JSON file to the new host and import it by:
    • Creating the chef database with curl -X PUT http://127.0.0.1:5984/chef
    • Importing the JSON via couchdb-load --input chef.json http://localhost:5984/chef
  • Restart the services on the new host
  • Tell Chef to rebuild its indices (otherwise Solr won't know about the existing data in CouchDB) via knife index rebuild
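
Put together, the data-migration commands from the list above look roughly like this (the tarball name is illustrative):

# On the old server
couchdb-dump http://127.0.0.1:5984/chef > /tmp/chef.json
tar czf /tmp/chef-files.tar.gz /etc/chef /var/lib/chef

# On the new server, after copying over chef.json and the tarball
tar xzf /tmp/chef-files.tar.gz -C /
curl -X PUT http://127.0.0.1:5984/chef
couchdb-load --input chef.json http://localhost:5984/chef
knife index rebuild
Chef server migration commands (sketch)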