Fixes: https://github.com/usds/justice40-tool/issues/947

This ensures we at least have some way to roll back in the worst-case scenario of the data being corrupted somehow while our upstream data sources are down in a way that prevents us from regenerating any datasets. See https://github.com/usds/justice40-tool/issues/946#issuecomment-989155252 for an analysis of the different ways the pipeline can fail and their impacts.

This guide is very brittle to pipeline changes and should be considered a temporary solution for the worst case. Someday the GitHub Actions should run more frequently and write to different paths, perhaps with a timestamp; that would make rolling back more straightforward, as a previous version of the data would already exist at some path in s3.
Rollback Plan
Note: This guide is up to date as of this commit, 12/16/2021. If you want to sanity check that this guide is still relevant, go to Rollback Details.
When To Rollback?
If for some reason the final map data that has been generated by the pipeline has become incorrect or is missing, this page documents the emergency steps to get the data back to a known good state.
Rollback Theory
The theory of rollback depends on two things:
- The s3 bucket containing our data uses s3 bucket versioning, allowing us to revert specific files back to a previous version.
- The Combine and Tileify step consumes only two files in s3 as input, making the strategy of reverting those files to a previous version a feasible way to run this job against a previous version.
If you feel confident in this and want to do the rollback now, proceed to Rollback Steps.
If you want to understand more deeply what's going on, and sanity check that the code hasn't changed since this guide was written, go to Rollback Details.
Rollback Steps
1. List Object Versions
aws s3api list-object-versions \
--bucket justice40-data \
--prefix data-sources/census.zip \
--query "Versions[*].{Key:Key,VersionId:VersionId,LastModified:LastModified}"
You should get something like:
[
...
{
"Key": "data-sources/census.zip",
"VersionId": "<version_id>",
"LastModified": "2021-11-29T18:57:40+00:00"
}
...
]
Do the same thing with the score file:
aws s3api list-object-versions \
--bucket justice40-data \
--prefix data-pipeline/data/score/csv/tiles/usa.csv \
--query "Versions[*].{Key:Key,VersionId:VersionId,LastModified:LastModified}"
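If a file has many versions, a short script can help pick out the newest version that predates the corruption. This is a hedged sketch, not part of the pipeline: it parses JSON shaped like the list-object-versions output above, and the cutoff timestamp and version IDs are made-up example values.

```python
import json
from datetime import datetime, timezone

def latest_version_before(versions_json, cutoff):
    """Return the VersionId of the newest version strictly older than cutoff."""
    versions = json.loads(versions_json)
    candidates = [
        v for v in versions
        if datetime.fromisoformat(v["LastModified"]) < cutoff
    ]
    if not candidates:
        return None
    # ISO-8601 timestamps with the same offset sort chronologically as strings.
    candidates.sort(key=lambda v: v["LastModified"], reverse=True)
    return candidates[0]["VersionId"]

# Example shaped like the output above (version IDs are made up).
sample = json.dumps([
    {"Key": "data-sources/census.zip", "VersionId": "newer",
     "LastModified": "2021-12-10T00:00:00+00:00"},
    {"Key": "data-sources/census.zip", "VersionId": "older",
     "LastModified": "2021-11-29T18:57:40+00:00"},
])
cutoff = datetime(2021, 12, 1, tzinfo=timezone.utc)
print(latest_version_before(sample, cutoff))  # -> older
```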
2. Download Previous Version
Based on the output from the commands above, select the <version_id> of the known-good version of that file, and run the following command to download it:
aws s3api get-object \
--bucket justice40-data \
--key data-sources/census.zip \
--version-id <version_id> \
census.zip
Do the same for the score file:
aws s3api get-object \
--bucket justice40-data \
--key data-pipeline/data/score/csv/tiles/usa.csv \
--version-id <version_id> \
usa.csv
Note: This command doesn't give you feedback on download progress the way curl does; it just sits there. To verify the download is happening, open another terminal and check the output file's size.
3. Upload New Version
After you've verified that the local files are correct, you can overwrite the live versions by running a normal s3 copy:
aws s3 cp census.zip s3://justice40-data/data-sources/census.zip
aws s3 cp usa.csv s3://justice40-data/data-pipeline/data/score/csv/tiles/usa.csv
4. Rerun Combine and Tileify
Run the Combine and Tileify GitHub Action to regenerate the map tiles from the data you just rolled back.
Rollback Details
Note: The links to the relevant code are included alongside the relevant snippets as a defense against this page becoming outdated. Make sure the code matches what the links point to, and verify that things still work as this guide assumes.
The relevant step that consumes the files from s3 should be in the Combine and Tileify Job:
- name: Run Scripts
run: |
poetry run python3 data_pipeline/application.py geo-score -s aws
poetry run python3 data_pipeline/application.py generate-map-tiles
This runs the score_geo task:
score_geo(data_source=data_source)
This uses the GeoScoreETL object to run the ETL:
score_geo = GeoScoreETL(data_source=data_source)
score_geo.extract()
score_geo.transform()
score_geo.load()
In the extract step, GeoScoreETL calls check_census_data_source and check_score_data_source:
# check census data
check_census_data_source(
census_data_path=self.DATA_PATH / "census",
census_data_source=self.DATA_SOURCE,
)
# check score data
check_score_data_source(
score_csv_data_path=self.SCORE_CSV_PATH,
score_data_source=self.DATA_SOURCE,
)
The check_score_data_source function downloads one file from s3:
TILE_SCORE_CSV_S3_URL = (
settings.AWS_JUSTICE40_DATAPIPELINE_URL
+ "/data/score/csv/tiles/usa.csv"
)
TILE_SCORE_CSV = score_csv_data_path / "tiles" / "usa.csv"
# download from s3 if census_data_source is aws
if score_data_source == "aws":
logger.info("Fetching Score Tile data from AWS S3")
download_file_from_url(
file_url=TILE_SCORE_CSV_S3_URL, download_file_name=TILE_SCORE_CSV
)
You can confirm that file exists at that location:
% aws s3 ls s3://justice40-data/data-pipeline/data/score/csv/tiles/usa.csv
2021-12-13 15:23:49 27845542 usa.csv
% curl --head https://justice40-data.s3.amazonaws.com/data-pipeline/data/score/csv/tiles/usa.csv
HTTP/1.1 200 OK
...
The check_census_data_source function downloads one file from s3:
CENSUS_DATA_S3_URL = settings.AWS_JUSTICE40_DATASOURCES_URL + "/census.zip"
DATA_PATH = settings.APP_ROOT / "data"
# download from s3 if census_data_source is aws
if census_data_source == "aws":
logger.info("Fetching Census data from AWS S3")
unzip_file_from_url(
CENSUS_DATA_S3_URL,
DATA_PATH / "tmp",
DATA_PATH,
)
You can confirm that file exists at that location:
% aws s3 ls s3://justice40-data/data-sources/census.zip
2021-11-29 13:57:40 845390373 census.zip
% curl --head https://justice40-data.s3.amazonaws.com/data-sources/census.zip
HTTP/1.1 200 OK
...
This is how we know the pipeline consumes exactly those two files. You can also confirm this in the Combine and Tileify GitHub Action logs.
If you feel confident in this and want to do the rollback now, proceed to Rollback Steps.