BULC (Bayesian Updating of Land Cover Classifications) is an algorithm developed in my lab to provide users with time series of land cover classifications. The algorithm ingests a time series of classified images, without requiring each classification to be near-perfect. BULC begins by assessing the agreement between each pair of classifications, and uses the amount of overlap as evidence that the new classification contains information worth integrating into its running record of land cover. At each time step, BULC applies Bayes' rule to update its estimate of land cover, producing a "posterior" classified map. This posterior map is the algorithm's best estimate of land cover at the time of the last input image.
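The per-pixel update step can be illustrated with a minimal sketch. This is not the Earth Engine implementation, just a toy Python/NumPy illustration of the Bayesian idea: a prior probability vector over classes is multiplied by an assumed likelihood column for the observed label and renormalized. The likelihood matrix here is invented (an 80%-correct classifier); in BULC it would be derived from the agreement between classifications.

```python
import numpy as np

def bulc_update(prior, observed, likelihood):
    """One Bayesian update for a single pixel.

    prior      : (K,) probability vector over K land-cover classes
    observed   : int, class label reported by the new classification
    likelihood : (K, K) assumed matrix where likelihood[t, o] is
                 P(classifier reports o | true class is t)
    """
    posterior = prior * likelihood[:, observed]
    return posterior / posterior.sum()

# Toy setup: 3 classes, hypothetical classifier that is right 80% of the time
K = 3
lik = np.full((K, K), 0.1)
np.fill_diagonal(lik, 0.8)

prior = np.full(K, 1.0 / K)        # start uninformed (uniform prior)
for obs in [0, 0, 1, 0]:           # a sequence of noisy observed labels
    prior = bulc_update(prior, obs, lik)

print(prior.argmax())  # → 0: class 0 dominates despite one disagreeing label
```

The point of the sketch is that a single wrong label (the `1` in the sequence) dents the posterior but does not overturn it, which is why individual input classifications need not be near-perfect.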
In testing, we have seen that BULC is excellent for areas where single-image analysis is not viable (e.g., where it is consistently cloudy). It takes advantage of the wealth of remotely sensed data made available in recent years. Furthermore, it is quick: several good-quality classifications can be combined rapidly into a higher-quality running record of land cover.
The BULC algorithm is in active development as a module within Earth Engine.
Below are the results for our BULC run tracing land-cover change over a 40-year period along the Roosevelt River, Mato Grosso, Brazil. This run incorporates more than 10 data sources at multiple spatial resolutions, and tracks closely with the observed changes in this area (shown below in an excerpt from Google's Timelapse imagery spanning 1972-2016).
Below is an early test run of the algorithm, zoomed in to northwestern Las Vegas, documenting urban expansion. This run incorporated 11 Landsat 5 images, classified using regression tree analysis into the same categories as those on the National Land Cover Database (NLCD) products. We were then able to compare our final 2006 posterior with the 2006 NLCD product, and found approximately 70% agreement between the two maps.
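Overall agreement between two classified maps of the kind reported above is simply the fraction of pixels assigned the same label in both. A minimal sketch, with invented toy arrays standing in for the real rasters:

```python
import numpy as np

# Hypothetical class-label rasters standing in for the real 2006 maps
# (values are made-up NLCD-style class codes on a tiny 3x3 grid)
bulc_2006 = np.array([[21, 21, 42], [52, 42, 42], [21, 52, 52]])
nlcd_2006 = np.array([[21, 42, 42], [52, 42, 42], [21, 52, 21]])

# Overall agreement: fraction of pixels with matching labels
agreement = np.mean(bulc_2006 == nlcd_2006)
print(f"{agreement:.0%}")  # → 78% for this toy pair (7 of 9 pixels match)
```

In practice one would also inspect a full confusion matrix rather than a single percentage, since agreement can vary sharply by class.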
The work is sponsored in part by a Google Earth Engine Research Award. Below, you can see a talk at Google on this research.