Like many other local government geospatial types, a good deal of my recent time has been devoted to obtaining, processing, and presenting 2010 Census Redistricting Data for my jurisdiction. If it’s possible for an experience to be both vexing and rewarding, this has been a textbook case.

Others have detailed how to obtain Census data and the often frustrating experience of simply trying to find relevant data, but let’s face it: getting Census data is just plain complicated and convoluted. After experiencing frustrations of my own and uttering a number of Munchian screams, among other things, I finally got the data I was looking for. Questions from the public and elected officials served as a clarion call for getting the data into a format that was much easier for potential users to digest. Even more importantly, with an eye toward redistricting, the appropriate data needed to be mapped and visualized. This was yet again a job for a simple web mapping application.

My vision for an application was one that allowed users to compare 2000 and 2010 figures for every level of Census geography in my county, from blocks on up to the entire county. I eventually came to realize that at the block level, decennial comparisons are often not worth the paper they’re written on because of shifting block boundaries. This was especially problematic in parts of the county where significant levels of growth had occurred since 2000. After coming to that realization, I looked at the next level up, block groups, and found that while boundaries had shifted, the amount of change wasn’t drastic. The same was true with tracts. Changed boundaries weren’t a concern with the remaining levels of geography since they were all tied to political subdivisions.

Now that I had my eye on the (moving) goalpost, the question was how to marry the 2000 data with the 2010 boundaries. Certainly this could be done using ArcGIS geoprocessing tools, but I also wanted to test some different approaches while being mindful of data accuracy and consistency. Enter SpatiaLite. Pounding out a little SQL and seeing results within seconds was a big timesaver compared with running a collection of GUI-based tools or a model, firing up ArcMap, and then loading the resultant data set.

After testing a number of different approaches in SpatiaLite, I opted to use block data as the basis for my block group and tract data sets. My approach consisted of nothing more than a series of spatial joins. However, because of the changes in boundaries, I couldn’t just take 2000 block data and cram it into 2010 block groups and tracts. After visually inspecting areas where block group and tract boundaries had changed, it appeared that spatial joins between the centroids of the 2000 blocks and the 2010 geographies’ polygons would produce reasonably accurate 2000-2010 comparisons.
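In SpatiaLite terms, a join like that can be sketched in a few lines of SQL. This is only an illustration, not my exact script; the table and column names (blocks2000, tracts2010, pop2000, geoid10, and so on) are hypothetical stand-ins for whatever your Census extracts actually use:

```sql
-- Hypothetical schema: blocks2000 carries the 2000 block counts,
-- tracts2010 carries the 2010 tract polygons and 2010 counts.
-- Each 2000 block is assigned to the 2010 tract containing its
-- centroid, then the 2000 counts are summed up to the tract level.
CREATE TABLE tract_compare AS
SELECT t.geoid10,
       t.pop2010,
       SUM(b.pop2000) AS pop2000
  FROM tracts2010 AS t
  JOIN blocks2000 AS b
    ON ST_Within(ST_Centroid(b.geometry), t.geometry)
 GROUP BY t.geoid10, t.pop2010;
```

The same pattern, pointed at the 2010 block group polygons instead, yields the block group comparison table. On larger datasets you would also want to bring SpatiaLite’s spatial index into play so every block isn’t tested against every tract.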

Five or six lines of pretty ordinary SQL in SpatiaLite, most of it just renaming fields, and mere seconds were all it took per dataset created. I can only imagine how many tools, and how much time, would have been required to achieve the same result with ArcGIS. Although I’ve used SpatiaLite and its cousin PostGIS to perform queries on numerous occasions, I was once again reminded that opening a GUI and clicking buttons often becomes a process that occurs with little thought given to the hamster running under the hood. Relationships between the datasets being processed are often not considered. It’s good to peel back the lid and see a bit more of the whole picture, not to mention that tweaking queries simply gives you more control over geoprocessing operations.

Now that I had my data, the question was what platform to use to get it on the web. Feeling that I needed more experience with the ArcGIS Server JavaScript API, I chose it. This was my first fairly significant foray into the API. Having OpenLayers experience was helpful, but I had to hit my head on my desk and splash cold water on my face on several occasions to snap out of an OpenLayers mindset. I exported my geometry tables out of SpatiaLite as shapefiles and then loaded them into a file geodatabase for my ArcGIS Server service.

The result:

Some lessons were learned, or in some cases, reinforced:

  • A hybrid proprietary-open source model is awfully compelling. Even Esri thinks so.
  • Shapefiles…shapefiles…shapefiles…sigh…why are we still dealing with them and their limitations in the second decade of the 21st century? Granted, I opted for an approach where I had to use them, but still…
  • The old mantra of loading everything into ArcSDE continues to ring ever more hollow for me. For small datasets, local file geodatabases clearly perform well in ArcGIS Server.
  • Caching isn’t everything. In this case, the time and resources involved in creating and maintaining a cache weren’t worth what would likely be a minimal boost in performance.
  • Google Chart Tools provide an absurdly simple way to deploy whiz-bang interactive charts in your JavaScript application.
  • Having some jQuery experience, I didn’t know what to make of Dojo at first. I warmed up to it, but jQuery would still be my first choice.
  • The JS API is a fine option when you’re working with ArcGIS Server, but it strikes me as missing a level of flexibility offered by OpenLayers. There were a few OpenLayers properties and functions I would have liked to have used, but I found no obvious equivalents in the JS API. They were not dealbreakers though.