DevOps for Small GIS Companies

Aims of DevOps Adopters

When considering the methodologies for driving a software development team, it’s easy to get swept up by the momentum around DevOps. The excitement is justified, in that companies have achieved impressive code throughput, product quality, and cost reductions by applying the principles. However, translating DevOps principles to a specific organizational context is tricky. At one end of the spectrum, ambitious novices may try to directly apply some of the practices of industry leaders like Google. Without the necessary organizational experiences, resources, and culture, these initiatives will be doomed to failure from the start. On the other end of the spectrum, a practice or two may be brought into the organization, but the insignificance of the changes and a lack of cultural shift will result in an overall low ROI. How should companies, especially SMBs, approach the DevOps wave?

The goals of the DevOps methodology, in order of increasing maturity, are:

  1. Continuous Integration
  2. Continuous Delivery
  3. Continuous Deployment

Continuous integration is where a developer’s code contributions are automatically processed and integrated into the master code base. This processing could involve automated code scans, builds, and testing. The next level of DevOps achievement is continuous delivery, where code contributions are automatically taken further down the release pipeline - into production-like environment deployments. These changes can then be made available to the customer whenever it makes business sense - all the technical hurdles have already been cleared and the software is ready to go. Finally, continuous deployment extends the DevOps pipeline all the way to the customer - code contributions are automatically processed, deployed, and released to customers without a manual gate.

Consider a pull request for a small new feature. The three systems would process this pull request as follows:

  1. Continuous integration - the pull request is scanned, built, and tested, then automatically merged if everything looks good.
  2. Continuous delivery - the pull request is scanned, built, tested, and (if results are good) deployed to production-like environments, from which it can be released to customers on demand.
  3. Continuous deployment - the pull request is scanned, built, tested, deployed (if results are good), and made available to the customer for real use.
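
As a rough illustration (not tied to any particular CI system), the three maturity levels can be modeled as the same pipeline stopping at different points. The stage names and the `process_pull_request` function here are hypothetical, invented purely to make the progression concrete.

```python
# Sketch: how far a passing pull request travels depends on the team's
# DevOps maturity level. All names here are illustrative.

CHECKS = ["scan", "build", "test"]

def process_pull_request(maturity: str) -> list[str]:
    """Return the steps a passing pull request goes through."""
    steps = CHECKS + ["merge"]                # continuous integration
    if maturity in ("delivery", "deployment"):
        steps.append("deploy-to-staging")     # continuous delivery
    if maturity == "deployment":
        steps.append("release-to-customers")  # continuous deployment
    return steps

print(process_pull_request("integration"))  # → ['scan', 'build', 'test', 'merge']
print(process_pull_request("deployment"))
```

The point of the sketch is that each level is a strict superset of the one before it - the checks never change, only how far a green result is allowed to travel automatically.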

Organizations adopting DevOps principles seek to move along this maturity progression. Naturally, the automated processes that enable DevOps workflows must be robust and reliable. This is where DevOps principles become DevOps practice. How can organizations architect their release tooling to enable these processes?


The practices that underpin DevOps workflows can be broken down functionally, but in Agile fashion they should also be viewed as a whole and cross-functionally. Consider continuous measurement: the automatic aggregation, analysis, and decision-making around data collected from the release pipeline. If a code scan suggests poor maintainability, that contribution can be blocked from progressing further down the pipeline. If automated acceptance tests take longer than usual to run, the contribution can be held until the slowdown is analyzed or fixed. Continuous measurement would likely fall in the monitoring/metrics/BI function, and more broadly, each function involved in the release pipeline can have its own “continuous” practice. Consider:

  • Continuous testing
  • Continuous building
  • Continuous provisioning
  • Continuous measurement
  • Continuous scanning
  • Continuous deployment
  • Continuous monitoring
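
To make the continuous-measurement idea concrete, here is a minimal sketch of a pipeline gate that blocks a contribution when a maintainability score is too low or the acceptance tests ran unusually long. The thresholds, metric names, and `measurement_gate` function are all invented for illustration; a real pipeline would pull these values from its own scanning and test tooling.

```python
# Sketch of a continuous-measurement gate. Thresholds and metric names
# are hypothetical stand-ins for a real pipeline's tooling.

MIN_MAINTAINABILITY = 60   # e.g. a 0-100 score from a code scanner
MAX_TEST_MINUTES = 30      # block if acceptance tests ran longer than this

def measurement_gate(metrics: dict) -> tuple[bool, list[str]]:
    """Decide whether a contribution may continue down the pipeline."""
    reasons = []
    if metrics.get("maintainability", 0) < MIN_MAINTAINABILITY:
        reasons.append("maintainability score below threshold")
    if metrics.get("test_minutes", 0) > MAX_TEST_MINUTES:
        reasons.append("acceptance tests ran longer than expected")
    return (not reasons, reasons)

ok, why = measurement_gate({"maintainability": 45, "test_minutes": 12})
print(ok, why)   # maintainability below threshold -> blocked
```

The gate returns its reasons alongside the verdict so the pipeline can surface *why* a contribution stopped, not just that it did.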

The idea is consistent across the disciplines - quickly generate value from robust, automated processes. The emphasis on automation encourages a “shift left” phenomenon, where everyone on the team does more “development-type” work. The quality assurance team spends less time on manual testing and more time developing test scripts. The ops folks spend less time provisioning servers and more time developing provisioning scripts.
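
As a toy example of the shift-left idea, a manual QA checklist item (“spot-check that map tile URLs are well-formed”) can become a script that runs on every commit. The tile URL convention, the `tile_url` function, and the sample coordinates below are all hypothetical.

```python
# Shift-left sketch: a former manual QA spot check, expressed as an
# automated script. The tile-URL convention here is hypothetical.

import re

TILE_URL_PATTERN = re.compile(r"^/tiles/\d+/\d+/\d+\.png$")

def tile_url(zoom: int, x: int, y: int) -> str:
    """Stand-in for the application code under test."""
    return f"/tiles/{zoom}/{x}/{y}.png"

def check_tile_urls() -> bool:
    """What was once a manual spot check, now runnable on every commit."""
    samples = [(0, 0, 0), (12, 2048, 1365), (18, 131071, 87381)]
    return all(TILE_URL_PATTERN.match(tile_url(z, x, y)) for z, x, y in samples)

print(check_tile_urls())
```
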

Continuous improvement differs from the previous examples in that it spans all disciplines and isn’t an aspect of the automated processes in the release pipeline. It is nevertheless valuable and fits nicely into a DevOps workflow. When development processes are highly automated and run on a continual basis, the right point in time to evaluate processes and gather feedback is less obvious. Compare this to fixed-duration projects, where a post-mortem and other feedback activities typically follow the conclusion of the project. This difference suggests that development efforts running on a continual basis need to be more deliberate about scheduling feedback, analysis, and reviews. To be clear, these DevOps workflows don’t forbid sprints and other fixed time-frame development windows; there is just no clear temporal separation of testing, deployment, and other activities. Continuous improvement is about always looking for opportunities to learn and improve. For example, the build pipeline going down for a few hours could trigger a retrospective meeting. Discussing processes and improvements may be more natural when issues are encountered, but the discussions should also take place when the team, systems, and/or processes perform exceptionally - such as the successful identification of malicious code during automated security scans.

For Small GIS Companies

For small companies with Geographic Information Systems (GIS) products, certain DevOps practices should be adopted first as high-ROI “quick wins”.

Data Management

GIS companies typically have large, complex collections of data behind their applications. A naturally high-value activity within the DevOps methodology is automated management of this data during the release pipeline. For example, if backups need to be created for data retention, they should be handled automatically in the release pipeline. Any data transformations, migrations, and cleanup that are typically part of the release process should be built into the DevOps pipeline as well.
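
A minimal sketch of what folding data management into the pipeline could look like: a pre-deploy step that snapshots a dataset before applying a migration. The file paths, the config format, and the toy coordinate-system rewrite below are all hypothetical; a real GIS pipeline would invoke its actual database or file-store tooling at this step.

```python
# Sketch: a release-pipeline step that backs up data before migrating it.
# Paths and the "migration" itself are stand-ins for real GIS data tooling.

import shutil
import tempfile
from pathlib import Path

def backup_then_migrate(dataset: Path, backup_dir: Path) -> Path:
    """Copy the dataset aside for retention, then migrate it in place."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    backup = backup_dir / (dataset.name + ".bak")
    shutil.copy2(dataset, backup)          # retention copy comes first
    migrated = dataset.read_text().replace("EPSG:4326", "EPSG:3857")
    dataset.write_text(migrated)           # toy "migration" of the config
    return backup

# Example run against a throwaway file:
with tempfile.TemporaryDirectory() as tmp:
    data = Path(tmp) / "layers.cfg"
    data.write_text("crs=EPSG:4326\n")
    bak = backup_then_migrate(data, Path(tmp) / "backups")
    print(bak.read_text(), data.read_text())
```

Because the backup is created before the migration runs, a failed migration can always be rolled back from the retention copy - which is exactly the guarantee the pipeline should automate rather than leave to a manual checklist.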

Acceptance Testing

The functionality of GIS applications is relatively hard to break down into small testable components. Additionally, integration with external services is common. For these reasons, automated acceptance testing provides especially high value. Systems built on Selenium or Cypress, for example, can be worked into the DevOps pipeline for early identification of systems-level issues.
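
Browser-driving tools like Selenium or Cypress need a real running environment, but the shape of an automated systems-level acceptance check can be sketched without one. The `GeocodeService` stub, its sample data, and the search flow below are invented for illustration - they stand in for a real external geocoding dependency and a real UI flow.

```python
# Sketch of an automated acceptance test at the systems level. The
# GeocodeService stub stands in for a real external geocoding service.

from typing import Optional, Tuple

class GeocodeService:
    """Stub for an external service the GIS app integrates with."""
    def lookup(self, address: str) -> Optional[Tuple[float, float]]:
        known = {"1600 Pennsylvania Ave": (38.8977, -77.0365)}
        return known.get(address)

def acceptance_search_flow(service: GeocodeService, address: str) -> str:
    """End-to-end-style check: user searches an address, map centers on it."""
    coords = service.lookup(address)
    if coords is None:
        return "error: address not found"
    lat, lon = coords
    return f"map centered at {lat},{lon}"

print(acceptance_search_flow(GeocodeService(), "1600 Pennsylvania Ave"))
```

In a real pipeline the stub would be replaced by the deployed application and its actual integrations; the value is that the whole search-to-map flow is exercised automatically on every change, surfacing integration breakage early.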


Monitoring

GIS applications often have tight integrations with external services. Further, the large amounts of data and processing in GIS applications can make them susceptible to performance drift. With this in mind, automated monitoring (and the automated alerting and action-taking that follows) can provide high ROI for these applications.
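
One simple way to catch performance drift is to compare recent response times against a historical baseline and alert when they exceed it by some factor. The window size, drift factor, and `drift_alert` function below are arbitrary illustrative choices, not a prescription.

```python
# Sketch: flag performance drift when recent latency climbs well above
# the historical baseline. Window and factor are illustrative choices.

from statistics import mean

DRIFT_FACTOR = 1.5   # alert when recent mean latency is 50% above baseline

def drift_alert(latencies_ms: list[float], recent_window: int = 5) -> bool:
    """True when the last `recent_window` samples drift above the baseline."""
    if len(latencies_ms) <= recent_window:
        return False                      # not enough history yet
    baseline = mean(latencies_ms[:-recent_window])
    recent = mean(latencies_ms[-recent_window:])
    return recent > DRIFT_FACTOR * baseline

history = [100, 110, 95, 105, 100, 98, 102, 180, 190, 200, 185, 195]
print(drift_alert(history))  # → True: recent requests nearly doubled
```

An alert like this could feed the same pipeline gates as continuous measurement - blocking a release, paging an operator, or simply opening a ticket for investigation.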

Where to Start

DevOps practices can and should be adopted incrementally, but with enough rigor to see tangible results and shift organizational attitudes. Start with a few high-value applications and processes, use well-supported tools, and make the change effort highly visible. Once the positive impact starts to be seen, subsequent transformations become easier.

