fixing incorrect ncld

lucasmbrown-usds 2022-09-09 08:23:24 -04:00
parent b0bea641ca
commit e325f3e28e
6 changed files with 8 additions and 160 deletions


@@ -313,6 +313,7 @@ def create_codebook(
        columns={constants.CEJST_SCORE_COLUMN_NAME: "Description"}
    )
# pylint: disable=too-many-arguments
def compare_to_list_of_expected_state_fips_codes(
    actual_state_fips_codes: typing.List[str],


@@ -17,7 +17,9 @@ class WildfireRiskETL(ExtractTransformLoad):
    SOURCE_URL = settings.AWS_JUSTICE40_DATASOURCES_URL + "/fsf_fire.zip"
    GEO_LEVEL = ValidGeoLevel.CENSUS_TRACT
    PUERTO_RICO_EXPECTED_IN_DATA = False
-   EXPECTED_MISSING_STATES = ['02', '15']
+   # Alaska and Hawaii are missing
+   EXPECTED_MISSING_STATES = ["02", "15"]
    # Output score variables (values set on datasets.yml) for linting purposes
    COUNT_PROPERTIES: str
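For context, here is a minimal sketch of how an `EXPECTED_MISSING_STATES` setting like this is meant to work during validation: state FIPS codes "02" (Alaska) and "15" (Hawaii) are removed from the expected set before it is compared against the FIPS codes actually present in the dataset. The helper below is illustrative only, not the project's `compare_to_list_of_expected_state_fips_codes` implementation, and the list of FIPS codes is truncated for the sketch.

```python
import typing

# Illustrative constants: the full list of state FIPS codes would normally come
# from a shared census module; this is truncated for the sketch.
ALL_STATE_FIPS_CODES = ["01", "02", "04", "05", "06", "15", "53", "72"]


def expected_fips_codes(
    expected_missing_states: typing.List[str],
    puerto_rico_expected_in_data: bool,
) -> typing.List[str]:
    """Return the FIPS codes a dataset is expected to contain (illustrative only)."""
    codes = [
        code for code in ALL_STATE_FIPS_CODES if code not in expected_missing_states
    ]
    if not puerto_rico_expected_in_data:
        codes = [code for code in codes if code != "72"]
    return codes


# With the WildfireRiskETL settings above, Alaska ("02"), Hawaii ("15"), and
# Puerto Rico ("72") are all excluded from the expected list.
print(expected_fips_codes(["02", "15"], puerto_rico_expected_in_data=False))
```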


@@ -1,80 +0,0 @@
# Nature deprived communities data
The following dataset was compiled by TPL using NCLD data. We define it as: AREA - [CROPLAND] - [IMPERVIOUS SURFACES].
## Codebook
- GEOID10: Census tract ID
- SF: State name
- CF: County name
- P200_PFS: Percent of individuals below 200% of the Federal Poverty Line (from CEJST source data).
- CA_LT20: Percent higher-ed enrollment rate is less than 20% (from CEJST source data).
- TractAcres: Acres of tract calculated from the ALAND10 field (land area in square meters) in 2010 census tracts.
  - CAVEAT: Some census tracts in the CEJST source file extend into open water. ALAND10 area was used to constrain percent calculations (e.g. cropland area) to land only.
- AcresCrops: Acres of crops, calculated by summing all cells in the NLCD Cropland Data Layer crop classes.
- PctCrops: Formula: AcresCrops/TractAcres*100.
- PctImperv: Mean imperviousness for each census tract.
  - CAVEAT: Where tracts extend into open water, mean imperviousness may be underestimated.
- __TO USE__ PctNatural: Formula: 100 - PctCrops - PctImperv.
- PctNat90: Tract is in or below the 10th percentile for PctNatural. 1 = True, 0 = False.
  - PctNatural 10th percentile = 28.6439%
- ImpOrCrop: If tract >= 90th percentile for PctImperv OR PctCrops. 1 = True, 0 = False.
  - PctImperv 90th percentile = 67.4146%
  - PctCrops 90th percentile = 27.8116%
- LowInAndEd: If tract >= 65th percentile for P200_PFS AND CA_LT20.
  - P200_PFS 65th percentile = 64.0%
- NatureDep: ImpOrCrop = 1 AND LowInAndEd = 1.

We added `GEOID10_TRACT` before converting the shapefile to CSV.
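To make the derived codebook fields concrete, here is a minimal pandas sketch of the formulas above. The input DataFrame and its values are hypothetical, and the percentile cutoffs are recomputed from the toy data rather than taken from TPL's published thresholds.

```python
import pandas as pd

# Hypothetical input rows; in practice these columns come from the TPL-compiled tract file.
df = pd.DataFrame(
    {
        "TractAcres": [25_000.0, 4_000.0, 12_000.0],
        "AcresCrops": [5_000.0, 100.0, 9_000.0],
        "PctImperv": [12.0, 80.0, 5.0],
        "P200_PFS": [0.70, 0.30, 0.90],
        "CA_LT20": [1, 0, 1],
    }
)

df["PctCrops"] = df["AcresCrops"] / df["TractAcres"] * 100
df["PctNatural"] = 100 - df["PctCrops"] - df["PctImperv"]

# Percentile-based flags, mirroring the codebook definitions. The codebook lists the
# thresholds TPL observed (PctNatural 10th pct = 28.6439, PctImperv 90th pct = 67.4146,
# PctCrops 90th pct = 27.8116); here they are recomputed from the toy data.
df["PctNat90"] = (df["PctNatural"] <= df["PctNatural"].quantile(0.10)).astype(int)
df["ImpOrCrop"] = (
    (df["PctImperv"] >= df["PctImperv"].quantile(0.90))
    | (df["PctCrops"] >= df["PctCrops"].quantile(0.90))
).astype(int)
df["LowInAndEd"] = (
    (df["P200_PFS"] >= df["P200_PFS"].quantile(0.65))
    & (df["CA_LT20"] >= df["CA_LT20"].quantile(0.65))
).astype(int)
df["NatureDep"] = ((df["ImpOrCrop"] == 1) & (df["LowInAndEd"] == 1)).astype(int)
```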
## Instructions to recreate
### Creating Impervious plus Cropland Attributes for Census Tracts
The Cropland Data Layer and NLCD Impervious layer were too big to put on our OneDrive, but you can download them here:

- CDL: https://www.nass.usda.gov/Research_and_Science/Cropland/Release/datasets/2021_30m_cdls.zip
- Impervious: https://s3-us-west-2.amazonaws.com/mrlc/nlcd_2019_impervious_l48_20210604.zip
#### Crops

1. Add an attribute called TractAcres (or similar) to the census tracts to hold a value representing acres covered by the census tract.
2. Calculate the TractAcres field for each census tract by using the Calculate Geometry tool (set the Property to Area (geodesic), and the Units to Acres).
3. From the Cropland Data Layer (CDL), extract only the pixels representing crops, using the Extract by Attributes tool in the ArcGIS Spatial Analyst toolbox.
   a. The attribute table tells you the names of each type of land cover. Since the CDL also contains NLCD classes and empty classes, the actual crop classes must be extracted.
4. From the crops-only raster extracted from the CDL, run the Reclassify tool to create a binary layer where all crops = 1, and everything else is Null.
5. Run the Tabulate Area tool:
   a. Zone data = census tracts
   b. Input raster data = the binary crops layer
   c. This will produce a table with the square meters of crops in each census tract, contained in an attribute called VALUE_1
6. Run the Join Field tool to join the table to the census tracts, with the VALUE_1 field as the Transfer Field, to transfer the VALUE_1 field (square meters of crops) to the census tracts.
7. Add a field to the census tracts called AcresCrops (or similar) to hold the acreage of crops in each census tract.
8. Calculate the AcresCrops field by multiplying the VALUE_1 field by 0.000247105 to produce acres of crops in each census tract.
   a. You can delete the VALUE_1 field.
9. Add a field called PctCrops (or similar) to hold the percent of each census tract occupied by crops.
10. Calculate the PctCrops field by dividing the AcresCrops field by the TractAcres field, and multiplying by 100 to get the percent.
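The acreage and percent calculations above can also be done outside ArcGIS once the Tabulate Area output is joined to the tracts. A minimal pandas sketch, assuming a hypothetical table where VALUE_1 already holds square meters of crops per tract:

```python
import pandas as pd

# Hypothetical Tabulate Area output joined back to the tracts; VALUE_1 is square meters of crops.
tracts = pd.DataFrame(
    {
        "GEOID10": ["01001020100", "01001020200"],
        "TractAcres": [25_000.0, 4_000.0],
        "VALUE_1": [20_234_282.0, 1_214_057.0],
    }
)

# Same conversion factor as the step above: square meters * 0.000247105 = acres.
tracts["AcresCrops"] = tracts["VALUE_1"] * 0.000247105
tracts["PctCrops"] = tracts["AcresCrops"] / tracts["TractAcres"] * 100

# The intermediate square-meter column is no longer needed.
tracts = tracts.drop(columns=["VALUE_1"])
```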
#### Impervious

1. Run the Zonal Statistics as Table tool:
   a. Zone data = census tracts
   b. Input raster data = impervious data raster layer
   c. Statistics type = Mean
   d. This will produce a table with the percent of each census tract occupied by impervious surfaces, contained in an attribute called MEAN
2. Run the Join Field tool to join the table to the census tracts, with the MEAN field as the Transfer Field, to transfer the MEAN field (percent impervious) to the census tracts.
3. Add a field called PctImperv (or similar) to hold the percent impervious value.
4. Calculate the PctImperv field by setting it equal to the MEAN field.
   a. You can delete the MEAN field.
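The same zonal mean can be computed with open-source tooling as well; this is an alternative sketch, not part of the documented workflow, assuming the geopandas and rasterstats packages and hypothetical file paths:

```python
import geopandas as gpd
from rasterstats import zonal_stats

# Hypothetical paths; substitute the 2010 census tracts and the unzipped NLCD 2019 impervious raster.
tracts = gpd.read_file("census_tracts_2010.shp")
stats = zonal_stats(
    tracts,
    "nlcd_2019_impervious_l48.img",
    stats=["mean"],
)

# Attach the zonal mean (percent impervious) to each tract, mirroring the MEAN -> PctImperv step.
tracts["PctImperv"] = [s["mean"] for s in stats]
```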
#### Combine the Crops and Impervious Data

1. Open the census tracts attribute table and add a field called PctNatural (or similar). Calculate this field using this equation: 100 - PctCrops - PctImperv. This produces a value that tells you the percent of each census tract covered in natural land cover.
2. Define the census tracts that fall in the 90th percentile of non-natural land cover:
   a. Add a field called PctNat90 (or similar)
   b. Right-click on the PctNatural field, and click Sort Ascending (lowest PctNatural values on top)
   c. Select the top 10 percent of rows after the sort
   d. Click on Show Selected Records in the attribute table
   e. Calculate the PctNat90 field for the selected records = 1
   f. Clear the selection
   g. The rows that now have a value of 1 for PctNat90 are the most lacking in natural land cover, and can be symbolized accordingly in a map
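The manual sort-and-select in step 2 amounts to flagging the lowest 10 percent of PctNatural values. A small pandas sketch of the same flag, assuming a hypothetical exported tract table that already has PctCrops and PctImperv:

```python
import pandas as pd

# Hypothetical file: the census tract table exported after the crops and impervious joins.
tracts = pd.read_csv("tracts_with_crops_and_impervious.csv")

tracts["PctNatural"] = 100 - tracts["PctCrops"] - tracts["PctImperv"]

# Mirror of "sort ascending, select the top 10 percent of rows, set PctNat90 = 1".
n_flagged = int(round(len(tracts) * 0.10))
least_natural = tracts["PctNatural"].nsmallest(n_flagged).index
tracts["PctNat90"] = 0
tracts.loc[least_natural, "PctNat90"] = 1
```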


@@ -1,79 +0,0 @@
# pylint: disable=unsubscriptable-object
# pylint: disable=unsupported-assignment-operation

import pandas as pd

from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad, ValidGeoLevel
from data_pipeline.utils import get_module_logger

logger = get_module_logger(__name__)


class NatureDeprivedETL(ExtractTransformLoad):
    """ETL class for the Nature Deprived Communities dataset"""

    NAME = "ncld_nature_deprived"
    SOURCE_URL = (
        settings.AWS_JUSTICE40_DATASOURCES_URL
        + "/usa_conus_nat_dep__compiled_by_TPL.csv.zip"
    )
    GEO_LEVEL = ValidGeoLevel.CENSUS_TRACT
    PUERTO_RICO_EXPECTED_IN_DATA = False
    # Alaska and Hawaii are missing
    EXPECTED_MISSING_STATES = ['02', '15']

    # Output score variables (values set on datasets.yml) for linting purposes
    ELIGIBLE_FOR_NATURE_DEPRIVED_FIELD_NAME: str
    TRACT_PERCENT_IMPERVIOUS_FIELD_NAME: str
    TRACT_PERCENT_NON_NATURAL_FIELD_NAME: str
    TRACT_PERCENT_CROPLAND_FIELD_NAME: str

    def __init__(self):
        # define the full path for the input CSV file
        self.INPUT_CSV = (
            self.get_tmp_path() / "usa_conus_nat_dep__compiled_by_TPL.csv"
        )

        # this is the main dataframe
        self.df: pd.DataFrame

        # Start dataset-specific vars here
        self.PERCENT_NATURAL_FIELD_NAME = "PctNatural"
        self.PERCENT_IMPERVIOUS_FIELD_NAME = "PctImperv"
        self.PERCENT_CROPLAND_FIELD_NAME = "PctCrops"
        self.TRACT_ACRES_FIELD_NAME = "TractAcres"
        # To ensure that tracts with very small acreage do not qualify, we create an
        # eligibility criterion similar to agrivalue. Here, we require that a tract
        # has at least 35 acres, or is above the 1st percentile for area. This does
        # indeed remove tracts from the 90th+ percentile later on.
        self.TRACT_ACRES_LOWER_BOUND = 35

    def transform(self) -> None:
        """Reads the unzipped data file into memory and applies the following
        transformations to prepare it for the load() method:

        - Renames columns as needed
        """
        logger.info("Transforming NCLD Data")

        df_ncld: pd.DataFrame = pd.read_csv(
            self.INPUT_CSV,
            dtype={self.INPUT_GEOID_TRACT_FIELD_NAME: str},
            low_memory=False,
        )

        df_ncld[self.ELIGIBLE_FOR_NATURE_DEPRIVED_FIELD_NAME] = (
            df_ncld[self.TRACT_ACRES_FIELD_NAME] >= self.TRACT_ACRES_LOWER_BOUND
        )
        df_ncld[self.TRACT_PERCENT_NON_NATURAL_FIELD_NAME] = (
            1 - df_ncld[self.PERCENT_NATURAL_FIELD_NAME]
        )

        # Assign the final df to the class's output_df for the load method, with renames
        self.output_df = df_ncld.rename(
            columns={
                self.PERCENT_IMPERVIOUS_FIELD_NAME: self.TRACT_PERCENT_IMPERVIOUS_FIELD_NAME,
                self.PERCENT_CROPLAND_FIELD_NAME: self.TRACT_PERCENT_CROPLAND_FIELD_NAME,
            }
        )
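A hedged sketch of how an ETL class like this would typically be exercised; it assumes the `ExtractTransformLoad` base class exposes `extract()` and `load()` as other ETLs in data_pipeline do, and the import path is hypothetical:

```python
# Hypothetical module path; adjust to wherever this ETL class lives in the pipeline.
from data_pipeline.etl.sources.nature_deprived.etl import NatureDeprivedETL

etl = NatureDeprivedETL()
etl.extract()    # download and unzip the TPL-compiled CSV (assumed base-class behavior)
etl.transform()  # the method shown above: eligibility flag, non-natural percent, renames
etl.load()       # write output_df to the dataset's output location (assumed base-class behavior)
```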


@@ -19,6 +19,10 @@ class NatureDeprivedETL(ExtractTransformLoad):
        + "/usa_conus_nat_dep__compiled_by_TPL.csv.zip"
    )
    GEO_LEVEL = ValidGeoLevel.CENSUS_TRACT
+   PUERTO_RICO_EXPECTED_IN_DATA = False
+   # Alaska and Hawaii are missing
+   EXPECTED_MISSING_STATES = ["02", "15"]
    # Output score variables (values set on datasets.yml) for linting purposes
    ELIGIBLE_FOR_NATURE_DEPRIVED_FIELD_NAME: str