mirror of
https://github.com/DOI-DO/j40-cejst-2.git
synced 2025-02-25 11:04:19 -08:00
* Refactor CDC life-expectancy (#1554)
* Update to new tract list (#1554)
* Adjust for tests (#1848)
* Add tests for cdc_places (#1848)
* Add EJScreen tests (#1848)
* Add tests for HUD housing (#1848)
* Add tests for GeoCorr (#1848)
* Add persistent poverty tests (#1848)
* Update for sources without zips, for new validation (#1848)
* Update tests for new multi-CSV bug (#1848)
  Lucas updated the CDC life expectancy data to handle a bug where two states are missing from the US Overall download. Since virtually none of our other ETL classes download multiple CSVs directly like this, it required a pretty invasive new mocking strategy.
* Add basic tests for nature deprived (#1848)
* Add wildfire tests (#1848)
* Add flood risk tests (#1848)
* Add DOT travel tests (#1848)
* Add historic redlining tests (#1848)
* Add tests for ME and WI (#1848)
* Update now that validation exists (#1848)
* Adjust for validation (#1848)
* Add health insurance back to cdc places (#1848)
  Oops.
* Update tests with new field (#1848)
* Test for blank tract removal (#1848)
* Add tracts for clipping behavior
* Test clipping and zfill behavior (#1848)
* Fix bad test assumption (#1848)
* Simplify class, add test for tract padding (#1848)
* Fix percentage inversion, update tests (#1848)
  Looking through the transformations, I noticed that we were subtracting a percentage that is usually between 0-100 from 1 instead of 100, and so were ending up with some surprising results (see the sketch below). Confirmed with lucasmbrown-usds.
* Add note about first street data (#1848)
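The percentage-inversion note above is easy to see in a small example. This is only an illustrative sketch; the column name below is made up and is not one of the project's real fields.

import pandas as pd

# Hypothetical column holding shares on a 0-100 percentage scale.
df = pd.DataFrame({"share_in_group": [87.5, 12.0, 100.0]})

# Buggy inversion: subtracting a 0-100 value from 1 gives large negative numbers.
buggy = 1 - df["share_in_group"]    # e.g. 1 - 87.5 == -86.5
# Fixed inversion: subtracting from 100 keeps the result a valid percentage.
fixed = 100 - df["share_in_group"]  # e.g. 100 - 87.5 == 12.5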
57 lines
1.6 KiB
Python
import zipfile

import pandas as pd

from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad, ValidGeoLevel
from data_pipeline.utils import get_module_logger

logger = get_module_logger(__name__)


class ExampleETL(ExtractTransformLoad):
    """A test-only, simple implementation of the ETL base class.

    This can be used for the base tests of the `TestETL` class.
    """

    INPUT_FIELD_NAME = "Input Field 1"
    EXAMPLE_FIELD_NAME = "Example Field 1"

    NAME = "example_dataset"
    LAST_UPDATED_YEAR = 2017
    SOURCE_URL = "https://www.example.com/example.zip"
    GEO_LEVEL = ValidGeoLevel.CENSUS_TRACT
    LOAD_YAML_CONFIG: bool = True

    def __init__(self):
        self.COLUMNS_TO_KEEP = [
            self.GEOID_TRACT_FIELD_NAME,
            self.EXAMPLE_FIELD_NAME,
        ]

    def extract(self):
        # Pretend to download zip from external URL, write it to CSV.
        zip_file_path = (
            settings.APP_ROOT
            / "tests"
            / "sources"
            / "example"
            / "data"
            / "input.zip"
        )

        logger.info(f"Extracting {zip_file_path}")
        with zipfile.ZipFile(zip_file_path, "r") as zip_ref:
            zip_ref.extractall(self.get_tmp_path())

    def transform(self):
        logger.info(f"Loading file from {self.get_tmp_path() / 'input.csv'}.")
        df: pd.DataFrame = pd.read_csv(
            self.get_tmp_path() / "input.csv",
            dtype={self.GEOID_TRACT_FIELD_NAME: "string"},
            low_memory=False,
        )

        df[self.EXAMPLE_FIELD_NAME] = df[self.INPUT_FIELD_NAME] * 2

        self.output_df = df
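For context, a minimal sketch of how a test might exercise this class end to end, assuming the fixture zip referenced in extract() is present; the assertions follow only from the `* 2` transform above, and the tract ID in the final comment is made up.

etl = ExampleETL()
etl.extract()    # unpacks tests/sources/example/data/input.zip into the tmp dir
etl.transform()  # reads input.csv and doubles INPUT_FIELD_NAME into EXAMPLE_FIELD_NAME

# Every value in the example field should be exactly twice the input field.
assert (
    etl.output_df[etl.EXAMPLE_FIELD_NAME]
    == 2 * etl.output_df[etl.INPUT_FIELD_NAME]
).all()

# Tract IDs are read with dtype="string" so leading zeros survive; an ID that
# lost its leading zero could be restored with str.zfill, e.g. "1073001100".zfill(11).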