Mirror of https://github.com/DOI-DO/j40-cejst-2.git, synced 2025-07-26 18:11:16 -07:00
Backend release branch to main (#1822)
* Create deploy_be_staging.yml (#1575) * Imputing income using geographic neighbors (#1559) Imputes income field with a light refactor. Needs more refactor and more tests (I spotchecked). Next ticket will check and address but a lot of "narwhal" architecture is here. * Adding HOLC indicator (#1579) Added HOLC indicator (Historic Redlining Score) from NCRC work; included 3.25 cutoff and low income as part of the housing burden category. * Update backend for Puerto Rico (#1686) * Update PR threshold count to 10 We now show 10 indicators for PR. See the discussion on the github issue for more info: https://github.com/usds/justice40-tool/issues/1621 * Do not use linguistic iso for Puerto Rico Closes 1350. Co-authored-by: Shelby Switzer <shelbyswitzer@gmail.com> * updating * Do not drop Guam and USVI from ETL (#1681) * Remove code that drops Guam and USVI from ETL * Add back code for dropping rows by FIPS code We may want this functionality, so let's keep it and just make the constant currently be an empty array. Co-authored-by: Shelby Switzer <shelbyswitzer@gmail.com> * Emma nechamkin/holc patch (#1742) Removing HOLC calculation from score narwhal. * updating ejscreen data, try two (#1747) * Rescaling linguistic isolation (#1750) Rescales linguistic isolation to drop puerto rico * adds UST indicator (#1786) adds leaky underground storage tanks * Changing LHE in tiles to a boolean (#1767) also includes merging / clean up of the release * added indoor plumbing to chas * added indoor plumbing to score housing burden * added indoor plumbing to score housing burden * first run through * Refactor DOE Energy Burden and COI to use YAML (#1796) * added tribalId for Supplemental dataset (#1804) * Setting zoom levels for tribal map (#1810) * NRI dataset and initial score YAML configuration (#1534) * update be staging gha * NRI dataset and initial score YAML configuration * checkpoint * adding data checks for release branch * passing tests * adding INPUT_EXTRACTED_FILE_NAME to base class * lint * columns to keep and tests * update be staging gha * checkpoint * update be staging gha * NRI dataset and initial score YAML configuration * checkpoint * adding data checks for release branch * passing tests * adding INPUT_EXTRACTED_FILE_NAME to base class * lint * columns to keep and tests * checkpoint * PR Review * renoving source url * tests * stop execution of ETL if there's a YAML schema issue * update be staging gha * adding source url as class var again * clean up * force cache bust * gha cache bust * dynamically set score vars from YAML * docsctrings * removing last updated year - optional reverse percentile * passing tests * sort order * column ordening * PR review * class level vars * Updating DatasetsConfig * fix pylint errors * moving metadata hint back to code Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov> * Correct copy typo (#1809) * Add basic test suite for COI (#1518) * Update COI to use new yaml (#1518) * Add tests for DOE energy budren (1518 * Add dataset config for energy budren (1518) * Refactor ETL to use datasets.yml (#1518) * Add fake GEOIDs to COI tests (#1518) * Refactor _setup_etl_instance_and_run_extract to base (#1518) For the three classes we've done so far, a generic _setup_etl_instance_and_run_extract will work fine, for the moment we can reuse the same setup method until we decide future classes need more flexibility --- but they can also always subclass so... 
* Add output-path tests (#1518) * Update YAML to match constant (#1518) * Don't blindly set float format (#1518) * Add defaults for extract (#1518) * Run YAML load on all subclasses (#1518) * Update description fields (#1518) * Update YAML per final format (#1518) * Update fixture tract IDs (#1518) * Update base class refactor (#1518) Now that NRI is final I needed to make a small number of updates to my refactored code. * Remove old comment (#1518) * Fix type signature and return (#1518) * Update per code review (#1518) Co-authored-by: Jorge Escobar <83969469+esfoobar-usds@users.noreply.github.com> Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov> Co-authored-by: Vim <86254807+vim-usds@users.noreply.github.com> * Update etl_score_geo.py Yikes! Fixing merge messup!
* updated to fix linting errors (#1818) Cleans and updates base branch * Adding back MapComparison video * Add FUDS ETL (#1817) * Add spatial join method (#1871) Since we'll need to figure out the tracts for a large number of points in future tickets, add a utility to handle grabbing the tract geometries and adding tract data to a point dataset.
* Add FUDS, also jupyter lab (#1871) * Add YAML configs for FUDS (#1871) * Allow input geoid to be optional (#1871) * Add FUDS ETL, tests, test-datae noteobook (#1871) This adds the ETL class for Formerly Used Defense Sites (FUDS). This is different from most other ETLs since these FUDS are not provided by tract, but instead by geographic point, so we need to assign FUDS to tracts and then do calculations from there. * Floats -> Ints, as I intended (#1871) * Floats -> Ints, as I intended (#1871) * Formatting fixes (#1871) * Add test false positive GEOIDs (#1871) * Add gdal binaries (#1871) * Refactor pandas code to be more idiomatic (#1871) Per Emma, the more pandas-y way of doing my counts is using np.where to add the values i need, then groupby and size. It is definitely more compact, and also I think more correct! * Update configs per Emma suggestions (#1871) * Type fixed! (#1871) * Remove spurious import from vscode (#1871) * Snapshot update after changing col name (#1871) * Move up GDAL (#1871) * Adjust geojson strategy (#1871) * Try running census separately first (#1871) * Fix import order (#1871) * Cleanup cache strategy (#1871) * Download census data from S3 instead of re-calculating (#1871) * Clarify pandas code per Emma (#1871) * Disable markdown check for link * Adding DOT composite to travel score (#1820) This adds the DOT dataset to the ETL and to the score. Note that currently we take a percentile of an average of percentiles. * Adding first street foundation data (#1823) Adding FSF flood and wildfire risk datasets to the score. * first run -- adding NCLD data to the ETL, but not yet to the score * Add abandoned mine lands data (#1824) * Add notebook to generate test data (#1780) * Add Abandoned Mine Land data (#1780) Using a similar structure but simpler apporach compared to FUDs, add an indicator for whether a tract has an abandonded mine. * Adding some detail to dataset readmes Just a thought! * Apply feedback from revieiw (#1780) * Fixup bad string that broke test (#1780) * Update a string that I should have renamed (#1780) * Reduce number of threads to reduce memory pressure (#1780) * Try not running geo data (#1780) * Run the high-memory sets separately (#1780) * Actually deduplicate (#1780) * Add flag for memory intensive ETLs (#1780) * Document new flag for datasets (#1780) * Add flag for new datasets fro rebase (#1780) Co-authored-by: Emma Nechamkin <97977170+emma-nechamkin@users.noreply.github.com> * Adding NLCD data (#1826) Adding NLCD's natural space indicator end to end to the score. * Add donut hole calculation to score (#1828) Adds adjacency index to the pipeline. Requires thorough QA * Adding eamlis and fuds data to legacy pollution in score (#1832) Update to add EAMLIS and FUDS data to score * Update to use new FSF files (#1838) backend is partially done! * Quick fix to kitchen or plumbing indicator Yikes! I think I messed something up and dropped the pctile field suffix from when the KP score gets calculated. Fixing right quick. * Fast flag update (#1844) Added additional flags for the front end based on our conversation in stand up this morning. * Tiles fix (#1845) Fixes score-geo and adds flags * Update etl_score_geo.py * Issue 1827: Add demographics to tiles and download files (#1833) * Adding demographics for use in sidebar and download files * Updates backend constants to N (#1854) * updated to show T/F/null vs T/F for AML and FUDS (#1866) * fix markdown * just testing that the boolean is preserved on gha * checking drop tracts works * OOPS! 
Old changes persisted * adding a check to the agvalue calculation for nri * updated with error messages * updated error message * tuple type * Score tests (#1847) * update Python version on README; tuple typing fix * Alaska tribal points fix (#1821) * Bump mistune from 0.8.4 to 2.0.3 in /data/data-pipeline (#1777) Bumps [mistune](https://github.com/lepture/mistune) from 0.8.4 to 2.0.3. - [Release notes](https://github.com/lepture/mistune/releases) - [Changelog](https://github.com/lepture/mistune/blob/master/docs/changes.rst) - [Commits](https://github.com/lepture/mistune/compare/v0.8.4...v2.0.3) --- updated-dependencies: - dependency-name: mistune dependency-type: indirect ... Signed-off-by: dependabot[bot] <support@github.com> Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * poetry update * initial pass of score tests * add threshold tests * added ses threshold (not donut, not island) * testing suite -- stopping for the day * added test for lead proxy indicator * Refactor score tests to make them less verbose and more direct (#1865) * Cleanup tests slightly before refactor (#1846) * Refactor score calculations tests * Feedback from review * Refactor output tests like calculatoin tests (#1846) (#1870) * Reorganize files (#1846) * Switch from lru_cache to fixture scorpes (#1846) * Add tests for all factors (#1846) * Mark smoketests and run as part of be deply (#1846) * Update renamed var (#1846) * Switch from named tuple to dataclass (#1846) This is annoying, but pylint in python3.8 was crashing parsing the named tuple. We weren't using any namedtuple-specific features, so I made the type a dataclass just to get pylint to behave. * Add default timout to requests (#1846) * Fix type (#1846) * Fix merge mistake on poetry.lock (#1846) Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: Jorge Escobar <jorge.e.escobar@omb.eop.gov> Co-authored-by: Jorge Escobar <83969469+esfoobar-usds@users.noreply.github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Matt Bowen <83967628+mattbowen-usds@users.noreply.github.com> Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov> * just testing that the boolean is preserved on gha (#1867) * updated with hopefully a fix; coercing aml, fuds, hrs to booleans for the raw value to preserve null character. * Adding tests to ensure proper calculations (#1871) * just testing that the boolean is preserved on gha * checking drop tracts works * adding a check to the agvalue calculation for nri * updated with error messages * tribal tiles fix (#1874) * Alaska tribal points fix (#1821) * tribal tiles fix * disabling child opportunity * lint * removing COI * removing commented out code * Pipeline tile tests (#1864) * temp update * updating with fips check * adding check on pfs * updating with pfs test * Update test_tiles_smoketests.py * Fix lint errors (#1848) * Add column names test (#1848) * Mark tests as smoketests (#1848) * Move to other score-related tests (#1848) * Recast Total threshold criteria exceeded to int (#1848) In writing tests to verify the output of the tiles csv matches the final score CSV, I noticed TC/Total threshold criteria exceeded was getting cast from an int64 to a float64 in the process of PostScoreETL. 
I tracked it down to the line where we merge the score dataframe with constants.DATA_CENSUS_CSV_FILE_PATH --- there where > 100 tracts in the national census CSV that don't exist in the score, so those ended up with a Total threshhold count of np.nan, which is a float, and thereby cast those columns to float. For the moment I just cast it back. * No need for low memeory (#1848) * Add additional tests of tiles.csv (#1848) * Drop pre-2010 rows before computing score (#1848) Note this is probably NOT the optimal place for this change; it might make more sense for each source to filter its own tracts down to the acceptable tract list. However, that would be a pretty invasive change, where this is central and plenty of other things are happening in score transform that could be moved to sources, so for today, here's where the change will live. * Fix typo (#1848) * Switch from filter to inner join (#1848) * Remove no-op lines from tiles (#1848) * Apply feedback from review, linter (#1848) * Check the values oeverything in the frame (#1848) * Refactor checker class (#1848) * Add test for state names (#1848) * cleanup from reviewing my own code (#1848) * Fix lint error (#1858) * Apply Emma's feedback from review (#1848) * Remove refs to national_df (#1848) * Account for new, fake nullable bools in tiles (#1848) To handle a geojson limitation, Emma converted some nullable boolean colunms to float64 in the tiles export with the values {0.0, 1.0, nan}, giving us the same expressiveness. Sadly, this broke my assumption that all columns between the score and tiles csvs would have the same dtypes, so I need to account for these new, fake bools in my test. * Use equals instead of my worse version (#1848) * Missed a spot where we called _create_score_data (#1848) * Update per safety (#1848) Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov> * Add tests to make sure each source makes it to the score correctly (#1878) * Remove unused persistent poverty from score (#1835) * Test a few datasets for overlap in the final score (#1835) * Add remaining data sources (#1853) * Apply code-review feedback (#1835) * Rearrange a little for readabililty (#1835) * Add tract test (#1835) * Add test for score values (#1835) * Check for unmatched source tracts (#1835) * Cleanup numeric code to plaintext (#1835) * Make import more obvious (#1835) * Updating traffic barriers to include low pop threshold (#1889) Changing the traffic barriers to only be included for places with recorded population * Remove no land tracts from map (#1894) remove from map * Issue 1831: missing life expectancy data from Maine and Wisconsin (#1887) * Fixing missing states and adding tests for states to all classes * Removing low pop tracts from FEMA population loss (#1898) dropping 0 population from FEMA * 1831 Follow up (#1902) This code causes no functional change to the code. It does two things: 1. Uses difference instead of - to improve code style for working with sets. 2. Removes the line EXPECTED_MISSING_STATES = ["02", "15"], which is now redundant because of the line I added (in a previous pull request) of ALASKA_AND_HAWAII_EXPECTED_IN_DATA = False. 
* Add tests for all non-census sources (#1899) * Refactor CDC life-expectancy (1554) * Update to new tract list (#1554) * Adjust for tests (#1848) * Add tests for cdc_places (#1848) * Add EJScreen tests (#1848) * Add tests for HUD housing (#1848) * Add tests for GeoCorr (#1848) * Add persistent poverty tests (#1848) * Update for sources without zips, for new validation (#1848) * Update tests for new multi-CSV but (#1848) Lucas updated the CDC life expectancy data to handle a bug where two states are missing from the US Overall download. Since virtually none of our other ETL classes download multiple CSVs directly like this, it required a pretty invasive new mocking strategy. * Add basic tests for nature deprived (#1848) * Add wildfire tests (#1848) * Add flood risk tests (#1848) * Add DOT travel tests (#1848) * Add historic redlining tests (#1848) * Add tests for ME and WI (#1848) * Update now that validation exists (#1848) * Adjust for validation (#1848) * Add health insurance back to cdc places (#1848) Ooops * Update tests with new field (#1848) * Test for blank tract removal (#1848) * Add tracts for clipping behavior * Test clipping and zfill behavior (#1848) * Fix bad test assumption (#1848) * Simplify class, add test for tract padding (#1848) * Fix percentage inversion, update tests (#1848) Looking through the transformations, I noticed that we were subtracting a percentage that is usually between 0-100 from 1 instead of 100, and so were endind up with some surprising results. Confirmed with lucasmbrown-usds * Add note about first street data (#1848) * Issue 1900: Tribal overlap with Census tracts (#1903) * working notebook * updating notebook * wip * fixing broken tests * adding tribal overlap files * WIP * WIP * WIP, calculated count and names * working * partial cleanup * partial cleanup * updating field names * fixing bug * removing pyogrio * removing unused imports * updating test fixtures to be more realistic * cleaning up notebook * fixing black * fixing flake8 errors * adding tox instructions * updating etl_score * suppressing warning * Use projected CRSes, ignore geom types (#1900) I looked into this a bit, and in general the geometry type mismatch changes very little about the calculation; we have a mix of multipolygons and polygons. The fastest thing to do is just not keep geom type; I did some runs with it set to both True and False, and they're the same within 9 digits of precision. Logically we just want to overlaps, regardless of how the actual geometries are encoded between the frames, so we can in this case ignore the geom types and feel OKAY. I also moved to projected CRSes, since we are actually trying to do area calculations and so like, we should. Again, the change is small in magnitude but logically more sound. 
* Readd CDC dataset config (#1900) * adding comments to fips code * delete unnecessary loggers Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov> * Improve score test documentation based on Lucas's feedback (#1835) (#1914) * Better document base on Lucas's feedback (#1835) * Fix typo (#1835) * Add test to verify GEOJSON matches tiles (#1835) * Remove NOOP line (#1835) * Move GEOJSON generation up for new smoketest (#1835) * Fixup code format (#1835) * Update readme for new somketest (#1835) * Cleanup source tests (#1912) * Move test to base for broader coverage (#1848) * Remove duplicate line (#1848) * FUDS needed an extra mock (#1848) * Add tribal count notebook (#1917) (#1919) * Add tribal count notebook (#1917) * test without caching * added comment Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov> * Add tribal overlap to downloads (#1907) * Add tribal data to downloads (#1904) * Update test pickle with current cols (#1904) * Remove text of tribe names from GeoJSON (#1904) * Update test data (#1904) * Add tribal overlap to smoketests (#1904) * Issue 1910: Do not impute income for 0 population tracts (#1918) * should be working, has unnecessary loggers * removing loggers and cleaning up * updating ejscreen tests * adding tests and responding to PR feedback * fixing broken smoke test * delete smoketest docs * updating click * updating click * Bump just jupyterlab (#1930) * Fixing link checker (#1929) * Update deps safety says are vulnerable (#1937) (#1938) Co-authored-by: matt bowen <matt@mattbowen.net> * Add demos for island areas (#1932) * Backfill population in island areas (#1882) * Update smoketest to account for backfills (#1882) As I wrote in the commend: We backfill island areas with data from the 2010 census, so if THOSE tracts have data beyond the data source, that's to be expected and is fine to pass. If some other state or territory does though, this should fail This ends up being a nice way of documenting that behavior i guess! 
* Fixup lint issues (#1882) * Add in race demos to 2010 census pull (#1851) * Add backfill data to score (#1851) * Change column name (#1851) * Fill demos after the score (#1851) * Add income back, adjust test (#1882) * Apply code-review feedback (#1851) * Add test for island area backfill (#1851) * Fix bad rename (#1851) * Reorder download fields, add plumbing back (#1942) * Add back lack of plumbing fields (#1920) * Reorder fields for excel (#1921) * Reorder excel fields (#1921) * Fix formating, lint errors, pickes (#1921) * Add missing plumbing col, fix order again (#1921) * Update that pickle (#1921) * refactoring tribal (#1960) * updated with scoring comparison * updated for narhwal -- leaving commented code in for now * pydantic upgrade * produce a string for the front end to ingest (#1963) * wip * i believe this works -- let's see the pipeline * updated fixtures * Adding ADJLI_ET (#1976) * updated tile data * ensuring adjli_et in * Add back income percentile (#1977) * Add missing field to download (#1964) * Remove pydantic since it's unused (#1964) * Add percentile to CSV (#1964) * Update downloadable pickle (#1964) * Issue 105: Configure and run `black` and other pre-commit hooks (clean branch) (#1962) * Configure and run `black` and other pre-commit hooks Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov> * Removing fixed python version for black (#1985) * Fixup TA_COUNT and TA_PERC (#1991) * Change TA_PERC, change TA_COUNT (#1988, #1989) - Make TA_PERC_STR back into a nullable float following the rules requestsed in #1989 - Move TA_COUNT to be TA_COUNT_AK, also add a null TA_COUNT_C for CONUS that we can fill in later. * Fix typo comment (#1988) * Issue 1992: Do not impute income for null population tracts (#1993) * Hotfix for DOT data source DNS issue (#1999) * Make tribal overlap set score N (#2004) * Add "Is a Tribal DAC" field (#1998) * Add tribal DACs to score N final (#1998) * Add new fields to downloads (#1998) * Make a int a float (#1998) * Update field names, apply feedback (#1998) * Add assertions around codebook (#2014) * Add assertion around codebook (#1505) * Assert csv and excel have same cols (#1505) * Remove suffixes from tribal lands (#1974) (#2008) * Data source location (#2015) * data source location * toml * cdc_places * cdc_svi_index * url updates * child oppy and dot travel * up to hud_recap * completed ticket * cache bust * hud_recap * us_army_fuds * Remove vars the frontend doesn't use (#2020) (#2022) I did a pretty rough and simple analysis of the variables we put in the tiles and grepped the frontend code to see if (1) they're ever accessed and (2) if they're used, even if they're read once. I removed everything I noticed was not accessed. * Disable file size limits on tiles (#2031) * Disable file size limits on tiles * Remove print debugs I know. * Update file name pattern (#2037) (#2038) * Update file name pattern (#2037) * Remove ETL from generation (2037) I looked more carefully, and this ETL step isn't used in the score, so there's no need to run it every time. Per previous steps, I removed it from constants so the code is there it won't run by default. * Round ALL the float fields for the tiles (#2040) * Round ALL the float fields for the tiles (#2033) * Floor in a simpler way (#2033) Emma pointed out that all teh stuff we're doing in floor_series is probably unnecessary for this case, so just use the built-in floor. 
* Update pickle I missed (#2033) * Clean commit of just aggregate burden notebook (#1819) added a burden notebook * Update the dockerfile (#2045) * Update so the image builds (#2026) * Fix bad dict (2026) * Rename census tract field in downloads (#2068) * Change tract ID field name (2060) * Update lockfile (#2061) * Bump safety, jupyter, wheel (#2061) * DOn't depend directly on wheel (2061) * Bring narwhal reqs in line with main * Update tribal area counts (#2071) * Rename tribal area field (2062) * Add missing file (#2062) * Add checks to create version (#2047) (#2052) * Fix failing safety (#2114) * Ignore vuln that doesn't affect us 2113 https://nvd.nist.gov/vuln/detail/CVE-2022-42969 landed recently and there's no fix in py (which is maintenance mode). From my analysis, that CVE cannot hurt us (famous last words), so we'll ignore the vuln for now. * 2113 Update our gdal ppa * that didn't work (2113) * Don't add the PPA, the package exists (#2113) * Fix type (#2113) * Force an update of wheel 2113 * Also remove PPA line from create-score-versions * Drop 3.8 because of wheel 2113 * Put back 3.8, use newer actions * Try another way of upgrading wheel 2113 * Upgrade wheel in tox too 2113 * Typo fix 2113 Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: Emma Nechamkin <97977170+emma-nechamkin@users.noreply.github.com> Co-authored-by: Shelby Switzer <shelby.c.switzer@omb.eop.gov> Co-authored-by: Shelby Switzer <shelbyswitzer@gmail.com> Co-authored-by: Emma Nechamkin <Emma.J.Nechamkin@omb.eop.gov> Co-authored-by: Matt Bowen <83967628+mattbowen-usds@users.noreply.github.com> Co-authored-by: Jorge Escobar <83969469+esfoobar-usds@users.noreply.github.com> Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov> Co-authored-by: Jorge Escobar <jorge.e.escobar@omb.eop.gov> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov> Co-authored-by: matt bowen <matt@mattbowen.net>
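A note on the tiles dtype issue called out above ("Total threshold criteria exceeded" going from int64 to float64 after the merge with the national census CSV): this is standard pandas behavior, and the sketch below only illustrates the mechanism and a cast-back workaround with made-up frame contents; it is not code from the pipeline, and the commit's exact fix may differ.

import pandas as pd

score_df = pd.DataFrame(
    {
        "GEOID10_TRACT": ["01001020100", "01001020200"],
        "Total threshold criteria exceeded": [3, 0],
    }
)
# A national census frame containing a tract that is absent from the score frame.
census_df = pd.DataFrame(
    {"GEOID10_TRACT": ["01001020100", "01001020200", "01001020300"]}
)

merged = census_df.merge(score_df, on="GEOID10_TRACT", how="left")
# The unmatched tract becomes NaN, so the whole column is upcast to float64.
print(merged["Total threshold criteria exceeded"].dtype)  # float64

# One way to restore an integer dtype (illustrative only):
merged["Total threshold criteria exceeded"] = (
    merged["Total threshold criteria exceeded"].fillna(0).astype("int64")
)
print(merged["Total threshold criteria exceeded"].dtype)  # int64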
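The tribal-overlap note above (ignore geometry types in the overlay, move to projected CRSes before doing area calculations) can be sketched in geopandas as follows. The file names and the EPSG code are illustrative assumptions, not the pipeline's actual inputs or configuration.

import geopandas as gpd

# Hypothetical inputs: census tracts and tribal areas as GeoJSON layers.
tracts = gpd.read_file("tracts.geojson")
tribal_areas = gpd.read_file("tribal_areas.geojson")

# Reproject to a projected, equal-area CRS before measuring any areas.
tracts = tracts.to_crs(epsg=5070)
tribal_areas = tribal_areas.to_crs(epsg=5070)

# keep_geom_type=False keeps every intersection piece regardless of whether the
# inputs were polygons or multipolygons; only the overlapping area matters here.
overlap = gpd.overlay(tracts, tribal_areas, how="intersection", keep_geom_type=False)
overlap["overlap area"] = overlap.geometry.area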
This commit is contained in:
parent db75b8ae76, commit b97e60bfbb
285 changed files with 20485 additions and 3880 deletions
@@ -1,8 +1,7 @@
import pandas as pd
from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.utils import get_module_logger
from data_pipeline.config import settings

logger = get_module_logger(__name__)
@@ -1,58 +1,148 @@
import pathlib
from pathlib import Path
import pandas as pd

import pandas as pd
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.utils import get_module_logger, download_file_from_url
from data_pipeline.etl.base import ValidGeoLevel
from data_pipeline.etl.score.etl_utils import (
    compare_to_list_of_expected_state_fips_codes,
)
from data_pipeline.score import field_names
from data_pipeline.utils import download_file_from_url
from data_pipeline.utils import get_module_logger
from data_pipeline.config import settings

logger = get_module_logger(__name__)


class CDCLifeExpectancy(ExtractTransformLoad):
    GEO_LEVEL = ValidGeoLevel.CENSUS_TRACT
    PUERTO_RICO_EXPECTED_IN_DATA = False

    NAME = "cdc_life_expectancy"

    if settings.DATASOURCE_RETRIEVAL_FROM_AWS:
        USA_FILE_URL = f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/cdc_file_expectancy/US_A.CSV"
    else:
        USA_FILE_URL: str = "https://ftp.cdc.gov/pub/Health_Statistics/NCHS/Datasets/NVSS/USALEEP/CSV/US_A.CSV"

    LOAD_YAML_CONFIG: bool = False
    LIFE_EXPECTANCY_FIELD_NAME = "Life expectancy (years)"
    INPUT_GEOID_TRACT_FIELD_NAME = "Tract ID"

    STATES_MISSING_FROM_USA_FILE = ["23", "55"]

    # For some reason, LEEP does not include Maine or Wisconsin in its "All of
    # USA" file. Load these separately.
    if settings.DATASOURCE_RETRIEVAL_FROM_AWS:
        WISCONSIN_FILE_URL: str = f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/cdc_file_expectancy/WI_A.CSV"
        MAINE_FILE_URL: str = f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/cdc_file_expectancy/ME_A.CSV"
    else:
        WISCONSIN_FILE_URL: str = "https://ftp.cdc.gov/pub/Health_Statistics/NCHS/Datasets/NVSS/USALEEP/CSV/WI_A.CSV"
        MAINE_FILE_URL: str = "https://ftp.cdc.gov/pub/Health_Statistics/NCHS/Datasets/NVSS/USALEEP/CSV/ME_A.CSV"

    TRACT_INPUT_COLUMN_NAME = "Tract ID"
    STATE_INPUT_COLUMN_NAME = "STATE2KX"

    raw_df: pd.DataFrame
    output_df: pd.DataFrame

    def __init__(self):
        self.FILE_URL: str = "https://ftp.cdc.gov/pub/Health_Statistics/NCHS/Datasets/NVSS/USALEEP/CSV/US_A.CSV"
        self.OUTPUT_PATH: Path = (
            self.DATA_PATH / "dataset" / "cdc_life_expectancy"
        )

        self.TRACT_INPUT_COLUMN_NAME = "Tract ID"
        self.LIFE_EXPECTANCY_FIELD_NAME = "Life expectancy (years)"

        # Constants for output
        self.COLUMNS_TO_KEEP = [
            self.GEOID_TRACT_FIELD_NAME,
            self.LIFE_EXPECTANCY_FIELD_NAME,
            field_names.LIFE_EXPECTANCY_FIELD,
        ]

        self.raw_df: pd.DataFrame
        self.output_df: pd.DataFrame

    def extract(self) -> None:
        logger.info("Starting data download.")

        download_file_name = (
            self.get_tmp_path() / "cdc_life_expectancy" / "usa.csv"
        )
    def _download_and_prep_data(
        self, file_url: str, download_file_name: pathlib.Path
    ) -> pd.DataFrame:
        download_file_from_url(
            file_url=self.FILE_URL,
            file_url=file_url,
            download_file_name=download_file_name,
            verify=True,
        )

        self.raw_df = pd.read_csv(
        df = pd.read_csv(
            filepath_or_buffer=download_file_name,
            dtype={
                # The following need to remain as strings for all of their digits, not get converted to numbers.
                self.TRACT_INPUT_COLUMN_NAME: "string",
                self.STATE_INPUT_COLUMN_NAME: "string",
            },
            low_memory=False,
        )

        return df

    def extract(self) -> None:
        logger.info("Starting data download.")

        all_usa_raw_df = self._download_and_prep_data(
            file_url=self.USA_FILE_URL,
            download_file_name=self.get_tmp_path() / "US_A.CSV",
        )

        # Check which states are missing
        states_in_life_expectancy_usa_file = list(
            all_usa_raw_df[self.STATE_INPUT_COLUMN_NAME].unique()
        )

        # Expect that PR, Island Areas, and Maine/Wisconsin are missing
        compare_to_list_of_expected_state_fips_codes(
            actual_state_fips_codes=states_in_life_expectancy_usa_file,
            continental_us_expected=self.CONTINENTAL_US_EXPECTED_IN_DATA,
            puerto_rico_expected=self.PUERTO_RICO_EXPECTED_IN_DATA,
            island_areas_expected=self.ISLAND_AREAS_EXPECTED_IN_DATA,
            additional_fips_codes_not_expected=self.STATES_MISSING_FROM_USA_FILE,
        )

        logger.info("Downloading data for Maine")
        maine_raw_df = self._download_and_prep_data(
            file_url=self.MAINE_FILE_URL,
            download_file_name=self.get_tmp_path() / "maine.csv",
        )

        logger.info("Downloading data for Wisconsin")
        wisconsin_raw_df = self._download_and_prep_data(
            file_url=self.WISCONSIN_FILE_URL,
            download_file_name=self.get_tmp_path() / "wisconsin.csv",
        )

        combined_df = pd.concat(
            objs=[all_usa_raw_df, maine_raw_df, wisconsin_raw_df],
            ignore_index=True,
            verify_integrity=True,
            axis=0,
        )

        states_in_combined_df = list(
            combined_df[self.STATE_INPUT_COLUMN_NAME].unique()
        )

        # Expect that PR and Island Areas are the only things now missing
        compare_to_list_of_expected_state_fips_codes(
            actual_state_fips_codes=states_in_combined_df,
            continental_us_expected=self.CONTINENTAL_US_EXPECTED_IN_DATA,
            puerto_rico_expected=self.PUERTO_RICO_EXPECTED_IN_DATA,
            island_areas_expected=self.ISLAND_AREAS_EXPECTED_IN_DATA,
            additional_fips_codes_not_expected=[],
        )

        # Save the updated version
        self.raw_df = combined_df

    def transform(self) -> None:
        logger.info("Starting DOE energy burden transform.")
        logger.info("Starting CDC life expectancy transform.")

        self.output_df = self.raw_df.rename(
            columns={
                "e(0)": self.LIFE_EXPECTANCY_FIELD_NAME,
                "e(0)": field_names.LIFE_EXPECTANCY_FIELD,
                self.TRACT_INPUT_COLUMN_NAME: self.GEOID_TRACT_FIELD_NAME,
            }
        )
@@ -1,20 +1,45 @@
import pandas as pd
import typing

import pandas as pd
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.utils import get_module_logger, download_file_from_url
from data_pipeline.etl.base import ValidGeoLevel
from data_pipeline.score import field_names
from data_pipeline.utils import download_file_from_url
from data_pipeline.utils import get_module_logger
from data_pipeline.config import settings

logger = get_module_logger(__name__)


class CDCPlacesETL(ExtractTransformLoad):
    NAME = "cdc_places"
    GEO_LEVEL: ValidGeoLevel = ValidGeoLevel.CENSUS_TRACT
    PUERTO_RICO_EXPECTED_IN_DATA = False

    CDC_GEOID_FIELD_NAME = "LocationID"
    CDC_VALUE_FIELD_NAME = "Data_Value"
    CDC_MEASURE_FIELD_NAME = "Measure"

    def __init__(self):
        self.OUTPUT_PATH = self.DATA_PATH / "dataset" / "cdc_places"

        self.CDC_PLACES_URL = "https://chronicdata.cdc.gov/api/views/cwsq-ngmh/rows.csv?accessType=DOWNLOAD"
        self.CDC_GEOID_FIELD_NAME = "LocationID"
        self.CDC_VALUE_FIELD_NAME = "Data_Value"
        self.CDC_MEASURE_FIELD_NAME = "Measure"
        if settings.DATASOURCE_RETRIEVAL_FROM_AWS:
            self.CDC_PLACES_URL = (
                f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/"
                "cdc_places/PLACES__Local_Data_for_Better_Health__Census_Tract_Data_2021_release.csv"
            )
        else:
            self.CDC_PLACES_URL = "https://chronicdata.cdc.gov/api/views/cwsq-ngmh/rows.csv?accessType=DOWNLOAD"

        self.COLUMNS_TO_KEEP: typing.List[str] = [
            self.GEOID_TRACT_FIELD_NAME,
            field_names.DIABETES_FIELD,
            field_names.ASTHMA_FIELD,
            field_names.HEART_DISEASE_FIELD,
            field_names.CANCER_FIELD,
            field_names.HEALTH_INSURANCE_FIELD,
            field_names.PHYS_HEALTH_NOT_GOOD_FIELD,
        ]

        self.df: pd.DataFrame
@@ -22,9 +47,7 @@ class CDCPlacesETL(ExtractTransformLoad):
        logger.info("Starting to download 520MB CDC Places file.")
        file_path = download_file_from_url(
            file_url=self.CDC_PLACES_URL,
            download_file_name=self.get_tmp_path()
            / "cdc_places"
            / "census_tract.csv",
            download_file_name=self.get_tmp_path() / "census_tract.csv",
        )

        self.df = pd.read_csv(
@@ -42,7 +65,6 @@ class CDCPlacesETL(ExtractTransformLoad):
            inplace=True,
            errors="raise",
        )

        # Note: Puerto Rico not included.
        self.df = self.df.pivot(
            index=self.GEOID_TRACT_FIELD_NAME,
@@ -65,12 +87,4 @@ class CDCPlacesETL(ExtractTransformLoad):
        )

        # Make the index (the census tract ID) a column, not the index.
        self.df.reset_index(inplace=True)

    def load(self) -> None:
        logger.info("Saving CDC Places Data")

        # mkdir census
        self.OUTPUT_PATH.mkdir(parents=True, exist_ok=True)

        self.df.to_csv(path_or_buf=self.OUTPUT_PATH / "usa.csv", index=False)
        self.output_df = self.df.reset_index()
@@ -53,7 +53,7 @@ For SVI 2018, the authors also included two adjunct variables, 1) 2014-2018 ACS

**Important Notes**

1. Tracts with zero estimates for the total population (N = 645 for the U.S.) were removed during the ranking process. These tracts were added back to the SVI databases after ranking.

2. The TOTPOP field value is 0, but the percentile ranking fields (RPL_THEME1, RPL_THEME2, RPL_THEME3, RPL_THEME4, and RPL_THEMES) were set to -999.
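Given the notes above, the -999 sentinel in the percentile ranking fields has to be treated as missing before any percentile or threshold logic runs. What follows is a minimal, hypothetical pandas sketch of that cleanup; the SVI ETL's actual handling may differ.

import numpy as np
import pandas as pd

svi_df = pd.read_csv("SVI2018_US.csv")

ranking_fields = ["RPL_THEME1", "RPL_THEME2", "RPL_THEME3", "RPL_THEME4", "RPL_THEMES"]

# -999 marks tracts whose total population estimate is zero; convert it to NaN so it
# cannot be mistaken for a real percentile rank.
svi_df[ranking_fields] = svi_df[ranking_fields].replace(-999, np.nan)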
@@ -66,4 +66,4 @@ here: https://www.census.gov/programs-surveys/acs/data/variance-tables.html.

For selected ACS 5-year Detailed Tables, “Users can calculate margins of error for aggregated data by using the variance replicates. Unlike available approximation formulas, this method results in an exact margin of error by using the covariance term.”

MOEs are _not_ included nor considered during this data processing nor for the scoring comparison tool.
@@ -1,9 +1,9 @@
import pandas as pd
import numpy as np

import pandas as pd
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.utils import get_module_logger
from data_pipeline.score import field_names
from data_pipeline.utils import get_module_logger
from data_pipeline.config import settings

logger = get_module_logger(__name__)
@@ -17,7 +17,13 @@ class CDCSVIIndex(ExtractTransformLoad):
    def __init__(self):
        self.OUTPUT_PATH = self.DATA_PATH / "dataset" / "cdc_svi_index"

        self.CDC_SVI_INDEX_URL = "https://svi.cdc.gov/Documents/Data/2018_SVI_Data/CSV/SVI2018_US.csv"
        if settings.DATASOURCE_RETRIEVAL_FROM_AWS:
            self.CDC_SVI_INDEX_URL = (
                f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/"
                "cdc_svi_index/SVI2018_US.csv"
            )
        else:
            self.CDC_SVI_INDEX_URL = "https://svi.cdc.gov/Documents/Data/2018_SVI_Data/CSV/SVI2018_US.csv"

        self.CDC_RPL_THEMES_THRESHOLD = 0.90
@@ -3,12 +3,12 @@ import json
import subprocess
from enum import Enum
from pathlib import Path

import geopandas as gpd

from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.utils import get_module_logger, unzip_file_from_url

from data_pipeline.etl.sources.census.etl_utils import get_state_fips_codes
from data_pipeline.utils import get_module_logger
from data_pipeline.utils import unzip_file_from_url

logger = get_module_logger(__name__)
@@ -20,19 +20,20 @@ class GeoFileType(Enum):


class CensusETL(ExtractTransformLoad):
    SHP_BASE_PATH = ExtractTransformLoad.DATA_PATH / "census" / "shp"
    GEOJSON_BASE_PATH = ExtractTransformLoad.DATA_PATH / "census" / "geojson"
    CSV_BASE_PATH = ExtractTransformLoad.DATA_PATH / "census" / "csv"
    GEOJSON_PATH = ExtractTransformLoad.DATA_PATH / "census" / "geojson"
    NATIONAL_TRACT_CSV_PATH = CSV_BASE_PATH / "us.csv"
    NATIONAL_TRACT_JSON_PATH = GEOJSON_BASE_PATH / "us.json"
    GEOID_TRACT_FIELD_NAME: str = "GEOID10_TRACT"

    def __init__(self):
        self.SHP_BASE_PATH = self.DATA_PATH / "census" / "shp"
        self.GEOJSON_BASE_PATH = self.DATA_PATH / "census" / "geojson"
        self.CSV_BASE_PATH = self.DATA_PATH / "census" / "csv"
        # the fips_states_2010.csv is generated from data here
        # https://www.census.gov/geographies/reference-files/time-series/geo/tallies.html
        self.STATE_FIPS_CODES = get_state_fips_codes(self.DATA_PATH)
        self.GEOJSON_PATH = self.DATA_PATH / "census" / "geojson"
        self.TRACT_PER_STATE: dict = {}  # in-memory dict per state
        self.TRACT_NATIONAL: list = []  # in-memory global list
        self.NATIONAL_TRACT_CSV_PATH = self.CSV_BASE_PATH / "us.csv"
        self.NATIONAL_TRACT_JSON_PATH = self.GEOJSON_BASE_PATH / "us.json"
        self.GEOID_TRACT_FIELD_NAME: str = "GEOID10_TRACT"

    def _path_for_fips_file(
        self, fips_code: str, file_type: GeoFileType
@@ -5,13 +5,11 @@ from pathlib import Path

import pandas as pd
from data_pipeline.config import settings
from data_pipeline.utils import (
    get_module_logger,
    remove_all_dirs_from_dir,
    remove_files_from_dir,
    unzip_file_from_url,
    zip_directory,
)
from data_pipeline.utils import get_module_logger
from data_pipeline.utils import remove_all_dirs_from_dir
from data_pipeline.utils import remove_files_from_dir
from data_pipeline.utils import unzip_file_from_url
from data_pipeline.utils import zip_directory

logger = get_module_logger(__name__)
@@ -1,22 +1,33 @@
import pandas as pd
import os
from collections import namedtuple

import geopandas as gpd
import pandas as pd
from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.sources.census_acs.etl_imputations import (
    calculate_income_measures,
)
from data_pipeline.etl.sources.census_acs.etl_utils import (
    retrieve_census_acs_data,
)
from data_pipeline.utils import get_module_logger
from data_pipeline.score import field_names
from data_pipeline.utils import get_module_logger
from data_pipeline.utils import unzip_file_from_url

logger = get_module_logger(__name__)

# because now there is a requirement for the us.json, this will port from
# AWS when a local copy does not exist.
CENSUS_DATA_S3_URL = settings.AWS_JUSTICE40_DATASOURCES_URL + "/census.zip"


class CensusACSETL(ExtractTransformLoad):
    def __init__(self):
        self.ACS_YEAR = 2019
        self.OUTPUT_PATH = (
            self.DATA_PATH / "dataset" / f"census_acs_{self.ACS_YEAR}"
        )
    NAME = "census_acs"
    ACS_YEAR = 2019
    MINIMUM_POPULATION_REQUIRED_FOR_IMPUTATION = 1

    def __init__(self):
        self.TOTAL_UNEMPLOYED_FIELD = "B23025_005E"
        self.TOTAL_IN_LABOR_FORCE = "B23025_003E"
        self.EMPLOYMENT_FIELDS = [
@@ -59,6 +70,23 @@ class CensusACSETL(ExtractTransformLoad):
        self.POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME = (
            "Percent of individuals < 200% Federal Poverty Line"
        )
        self.IMPUTED_POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME = (
            "Percent of individuals < 200% Federal Poverty Line, imputed"
        )

        self.ADJUSTED_POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME = (
            "Adjusted percent of individuals < 200% Federal Poverty Line"
        )

        self.ADJUSTED_AND_IMPUTED_POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME_PRELIMINARY = (
            "Preliminary adjusted percent of individuals < 200% Federal Poverty Line,"
            + " imputed"
        )

        self.ADJUSTED_AND_IMPUTED_POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME = (
            "Adjusted percent of individuals < 200% Federal Poverty Line,"
            + " imputed"
        )

        self.MEDIAN_HOUSE_VALUE_FIELD = "B25077_001E"
        self.MEDIAN_HOUSE_VALUE_FIELD_NAME = (
@@ -136,6 +164,10 @@ class CensusACSETL(ExtractTransformLoad):
            "Percent enrollment in college or graduate school"
        )

        self.IMPUTED_COLLEGE_ATTENDANCE_FIELD = (
            "Percent enrollment in college or graduate school, imputed"
        )

        self.COLLEGE_NON_ATTENDANCE_FIELD = "Percent of population not currently enrolled in college or graduate school"

        self.RE_FIELDS = [
@@ -153,19 +185,25 @@ class CensusACSETL(ExtractTransformLoad):
            "B03002_003E",
            "B03003_001E",
            "B03003_003E",
            "B02001_007E",  # "Some other race alone"
        ]

        # Name output demographics fields.
        self.BLACK_FIELD_NAME = "Black or African American alone"
        self.AMERICAN_INDIAN_FIELD_NAME = (
            "American Indian and Alaska Native alone"
        )
        self.ASIAN_FIELD_NAME = "Asian alone"
        self.HAWAIIAN_FIELD_NAME = "Native Hawaiian and Other Pacific alone"
        self.TWO_OR_MORE_RACES_FIELD_NAME = "Two or more races"
        self.NON_HISPANIC_WHITE_FIELD_NAME = "Non-Hispanic White"
        self.BLACK_FIELD_NAME = "Black or African American"
        self.AMERICAN_INDIAN_FIELD_NAME = "American Indian / Alaska Native"
        self.ASIAN_FIELD_NAME = "Asian"
        self.HAWAIIAN_FIELD_NAME = "Native Hawaiian or Pacific"
        self.TWO_OR_MORE_RACES_FIELD_NAME = "two or more races"
        self.NON_HISPANIC_WHITE_FIELD_NAME = "White"
        self.HISPANIC_FIELD_NAME = "Hispanic or Latino"
        # Note that `other` is lowercase because the whole field will show up in the download
        # file as "Percent other races"
        self.OTHER_RACE_FIELD_NAME = "other races"

        self.TOTAL_RACE_POPULATION_FIELD_NAME = (
            "Total population surveyed on racial data"
        )

        # Name output demographics fields.
        self.RE_OUTPUT_FIELDS = [
            self.BLACK_FIELD_NAME,
            self.AMERICAN_INDIAN_FIELD_NAME,
@@ -174,32 +212,133 @@ class CensusACSETL(ExtractTransformLoad):
            self.TWO_OR_MORE_RACES_FIELD_NAME,
            self.NON_HISPANIC_WHITE_FIELD_NAME,
            self.HISPANIC_FIELD_NAME,
            self.OTHER_RACE_FIELD_NAME,
        ]

        self.PERCENT_PREFIX = "Percent "
        # Note: this field does double-duty here. It's used as the total population
        # within the age questions.
        # It's also what EJScreen used as their variable for total population in the
        # census tract, so we use it similarly.
        # See p. 83 of https://www.epa.gov/sites/default/files/2021-04/documents/ejscreen_technical_document.pdf
        self.TOTAL_POPULATION_FROM_AGE_TABLE = "B01001_001E"  # Estimate!!Total:

        self.AGE_INPUT_FIELDS = [
            self.TOTAL_POPULATION_FROM_AGE_TABLE,
            "B01001_003E",  # Estimate!!Total:!!Male:!!Under 5 years
            "B01001_004E",  # Estimate!!Total:!!Male:!!5 to 9 years
            "B01001_005E",  # Estimate!!Total:!!Male:!!10 to 14 years
            "B01001_006E",  # Estimate!!Total:!!Male:!!15 to 17 years
            "B01001_007E",  # Estimate!!Total:!!Male:!!18 and 19 years
            "B01001_008E",  # Estimate!!Total:!!Male:!!20 years
            "B01001_009E",  # Estimate!!Total:!!Male:!!21 years
            "B01001_010E",  # Estimate!!Total:!!Male:!!22 to 24 years
            "B01001_011E",  # Estimate!!Total:!!Male:!!25 to 29 years
            "B01001_012E",  # Estimate!!Total:!!Male:!!30 to 34 years
            "B01001_013E",  # Estimate!!Total:!!Male:!!35 to 39 years
            "B01001_014E",  # Estimate!!Total:!!Male:!!40 to 44 years
            "B01001_015E",  # Estimate!!Total:!!Male:!!45 to 49 years
            "B01001_016E",  # Estimate!!Total:!!Male:!!50 to 54 years
            "B01001_017E",  # Estimate!!Total:!!Male:!!55 to 59 years
            "B01001_018E",  # Estimate!!Total:!!Male:!!60 and 61 years
            "B01001_019E",  # Estimate!!Total:!!Male:!!62 to 64 years
            "B01001_020E",  # Estimate!!Total:!!Male:!!65 and 66 years
            "B01001_021E",  # Estimate!!Total:!!Male:!!67 to 69 years
            "B01001_022E",  # Estimate!!Total:!!Male:!!70 to 74 years
            "B01001_023E",  # Estimate!!Total:!!Male:!!75 to 79 years
            "B01001_024E",  # Estimate!!Total:!!Male:!!80 to 84 years
            "B01001_025E",  # Estimate!!Total:!!Male:!!85 years and over
            "B01001_027E",  # Estimate!!Total:!!Female:!!Under 5 years
            "B01001_028E",  # Estimate!!Total:!!Female:!!5 to 9 years
            "B01001_029E",  # Estimate!!Total:!!Female:!!10 to 14 years
            "B01001_030E",  # Estimate!!Total:!!Female:!!15 to 17 years
            "B01001_031E",  # Estimate!!Total:!!Female:!!18 and 19 years
            "B01001_032E",  # Estimate!!Total:!!Female:!!20 years
            "B01001_033E",  # Estimate!!Total:!!Female:!!21 years
            "B01001_034E",  # Estimate!!Total:!!Female:!!22 to 24 years
            "B01001_035E",  # Estimate!!Total:!!Female:!!25 to 29 years
            "B01001_036E",  # Estimate!!Total:!!Female:!!30 to 34 years
            "B01001_037E",  # Estimate!!Total:!!Female:!!35 to 39 years
            "B01001_038E",  # Estimate!!Total:!!Female:!!40 to 44 years
            "B01001_039E",  # Estimate!!Total:!!Female:!!45 to 49 years
            "B01001_040E",  # Estimate!!Total:!!Female:!!50 to 54 years
            "B01001_041E",  # Estimate!!Total:!!Female:!!55 to 59 years
            "B01001_042E",  # Estimate!!Total:!!Female:!!60 and 61 years
            "B01001_043E",  # Estimate!!Total:!!Female:!!62 to 64 years
            "B01001_044E",  # Estimate!!Total:!!Female:!!65 and 66 years
            "B01001_045E",  # Estimate!!Total:!!Female:!!67 to 69 years
            "B01001_046E",  # Estimate!!Total:!!Female:!!70 to 74 years
            "B01001_047E",  # Estimate!!Total:!!Female:!!75 to 79 years
            "B01001_048E",  # Estimate!!Total:!!Female:!!80 to 84 years
            "B01001_049E",  # Estimate!!Total:!!Female:!!85 years and over
        ]

        self.AGE_OUTPUT_FIELDS = [
            field_names.PERCENT_AGE_UNDER_10,
            field_names.PERCENT_AGE_10_TO_64,
            field_names.PERCENT_AGE_OVER_64,
        ]

        self.STATE_GEOID_FIELD_NAME = "GEOID2"

        self.COLUMNS_TO_KEEP = (
            [
                self.GEOID_TRACT_FIELD_NAME,
                field_names.TOTAL_POP_FIELD,
                self.UNEMPLOYED_FIELD_NAME,
                self.LINGUISTIC_ISOLATION_FIELD_NAME,
                self.MEDIAN_INCOME_FIELD_NAME,
                self.POVERTY_LESS_THAN_100_PERCENT_FPL_FIELD_NAME,
                self.POVERTY_LESS_THAN_150_PERCENT_FPL_FIELD_NAME,
                self.POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME,
                self.IMPUTED_POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME,
                self.MEDIAN_HOUSE_VALUE_FIELD_NAME,
                self.HIGH_SCHOOL_ED_FIELD,
                self.COLLEGE_ATTENDANCE_FIELD,
                self.COLLEGE_NON_ATTENDANCE_FIELD,
                self.IMPUTED_COLLEGE_ATTENDANCE_FIELD,
                field_names.IMPUTED_INCOME_FLAG_FIELD_NAME,
            ]
            + self.RE_OUTPUT_FIELDS
            + [self.PERCENT_PREFIX + field for field in self.RE_OUTPUT_FIELDS]
            + [
                field_names.PERCENT_PREFIX + field
                for field in self.RE_OUTPUT_FIELDS
            ]
            + self.AGE_OUTPUT_FIELDS
            + [
                field_names.POVERTY_LESS_THAN_200_FPL_FIELD,
                field_names.POVERTY_LESS_THAN_200_FPL_IMPUTED_FIELD,
            ]
        )

        self.df: pd.DataFrame

    # pylint: disable=too-many-arguments
    def _merge_geojson(
        self,
        df: pd.DataFrame,
        usa_geo_df: gpd.GeoDataFrame,
        geoid_field: str = "GEOID10",
        geometry_field: str = "geometry",
        state_code_field: str = "STATEFP10",
        county_code_field: str = "COUNTYFP10",
    ) -> gpd.GeoDataFrame:
        usa_geo_df[geoid_field] = (
            usa_geo_df[geoid_field].astype(str).str.zfill(11)
        )
        return gpd.GeoDataFrame(
            df.merge(
                usa_geo_df[
                    [
                        geoid_field,
                        geometry_field,
                        state_code_field,
                        county_code_field,
                    ]
                ],
                left_on=[self.GEOID_TRACT_FIELD_NAME],
                right_on=[geoid_field],
            )
        )

    def extract(self) -> None:
        # Define the variables to retrieve
        variables = (
@@ -213,6 +352,7 @@ class CensusACSETL(ExtractTransformLoad):
            + self.EDUCATIONAL_FIELDS
            + self.RE_FIELDS
            + self.COLLEGE_ATTENDANCE_FIELDS
            + self.AGE_INPUT_FIELDS
        )

        self.df = retrieve_census_acs_data(
@@ -227,12 +367,37 @@ class CensusACSETL(ExtractTransformLoad):

        df = self.df

        # Rename two fields.
        # Here we join the geometry of the US to the dataframe so that we can impute
        # The income of neighbors. first this looks locally; if there's no local
        # geojson file for all of the US, this will read it off of S3
        logger.info("Reading in geojson for the country")
        if not os.path.exists(
            self.DATA_PATH / "census" / "geojson" / "us.json"
        ):
            logger.info("Fetching Census data from AWS S3")
            unzip_file_from_url(
                CENSUS_DATA_S3_URL,
                self.DATA_PATH / "tmp",
                self.DATA_PATH,
            )

        geo_df = gpd.read_file(
            self.DATA_PATH / "census" / "geojson" / "us.json",
        )

        df = self._merge_geojson(
            df=df,
            usa_geo_df=geo_df,
        )

        # Rename some fields.
        df = df.rename(
            columns={
                self.MEDIAN_HOUSE_VALUE_FIELD: self.MEDIAN_HOUSE_VALUE_FIELD_NAME,
                self.MEDIAN_INCOME_FIELD: self.MEDIAN_INCOME_FIELD_NAME,
            }
                self.TOTAL_POPULATION_FROM_AGE_TABLE: field_names.TOTAL_POP_FIELD,
            },
            errors="raise",
        )

        # Handle null values for various fields, which are `-666666666`.
@@ -318,38 +483,101 @@ class CensusACSETL(ExtractTransformLoad):
        )

        # Calculate some demographic information.
        df[self.BLACK_FIELD_NAME] = df["B02001_003E"]
        df[self.AMERICAN_INDIAN_FIELD_NAME] = df["B02001_004E"]
        df[self.ASIAN_FIELD_NAME] = df["B02001_005E"]
        df[self.HAWAIIAN_FIELD_NAME] = df["B02001_006E"]
        df[self.TWO_OR_MORE_RACES_FIELD_NAME] = df["B02001_008E"]
        df[self.NON_HISPANIC_WHITE_FIELD_NAME] = df["B03002_003E"]
        df[self.HISPANIC_FIELD_NAME] = df["B03003_003E"]

        # Calculate demographics as percent
        df[self.PERCENT_PREFIX + self.BLACK_FIELD_NAME] = (
            df["B02001_003E"] / df["B02001_001E"]
        )
        df[self.PERCENT_PREFIX + self.AMERICAN_INDIAN_FIELD_NAME] = (
            df["B02001_004E"] / df["B02001_001E"]
        )
        df[self.PERCENT_PREFIX + self.ASIAN_FIELD_NAME] = (
            df["B02001_005E"] / df["B02001_001E"]
        )
        df[self.PERCENT_PREFIX + self.HAWAIIAN_FIELD_NAME] = (
            df["B02001_006E"] / df["B02001_001E"]
        )
        df[self.PERCENT_PREFIX + self.TWO_OR_MORE_RACES_FIELD_NAME] = (
            df["B02001_008E"] / df["B02001_001E"]
        )
        df[self.PERCENT_PREFIX + self.NON_HISPANIC_WHITE_FIELD_NAME] = (
            df["B03002_003E"] / df["B03002_001E"]
        )
        df[self.PERCENT_PREFIX + self.HISPANIC_FIELD_NAME] = (
            df["B03003_003E"] / df["B03003_001E"]
        df = df.rename(
            columns={
                "B02001_003E": self.BLACK_FIELD_NAME,
                "B02001_004E": self.AMERICAN_INDIAN_FIELD_NAME,
                "B02001_005E": self.ASIAN_FIELD_NAME,
                "B02001_006E": self.HAWAIIAN_FIELD_NAME,
                "B02001_008E": self.TWO_OR_MORE_RACES_FIELD_NAME,
                "B03002_003E": self.NON_HISPANIC_WHITE_FIELD_NAME,
                "B03003_003E": self.HISPANIC_FIELD_NAME,
                "B02001_007E": self.OTHER_RACE_FIELD_NAME,
                "B02001_001E": self.TOTAL_RACE_POPULATION_FIELD_NAME,
            },
            errors="raise",
        )

        # Calculate college attendance:
        for race_field_name in self.RE_OUTPUT_FIELDS:
            df[field_names.PERCENT_PREFIX + race_field_name] = (
                df[race_field_name] / df[self.TOTAL_RACE_POPULATION_FIELD_NAME]
            )

        # First value is the `age bucket`, and the second value is a list of all fields
        # that will be summed in the calculations of the total population in that age
        # bucket.
        age_bucket_and_its_sum_columns = [
            (
                field_names.PERCENT_AGE_UNDER_10,
                [
                    "B01001_003E",  # Estimate!!Total:!!Male:!!Under 5 years
                    "B01001_004E",  # Estimate!!Total:!!Male:!!5 to 9 years
                    "B01001_027E",  # Estimate!!Total:!!Female:!!Under 5 years
                    "B01001_028E",  # Estimate!!Total:!!Female:!!5 to 9 years
                ],
            ),
            (
                field_names.PERCENT_AGE_10_TO_64,
                [
                    "B01001_005E",  # Estimate!!Total:!!Male:!!10 to 14 years
                    "B01001_006E",  # Estimate!!Total:!!Male:!!15 to 17 years
                    "B01001_007E",  # Estimate!!Total:!!Male:!!18 and 19 years
                    "B01001_008E",  # Estimate!!Total:!!Male:!!20 years
                    "B01001_009E",  # Estimate!!Total:!!Male:!!21 years
                    "B01001_010E",  # Estimate!!Total:!!Male:!!22 to 24 years
                    "B01001_011E",  # Estimate!!Total:!!Male:!!25 to 29 years
                    "B01001_012E",  # Estimate!!Total:!!Male:!!30 to 34 years
                    "B01001_013E",  # Estimate!!Total:!!Male:!!35 to 39 years
                    "B01001_014E",  # Estimate!!Total:!!Male:!!40 to 44 years
                    "B01001_015E",  # Estimate!!Total:!!Male:!!45 to 49 years
                    "B01001_016E",  # Estimate!!Total:!!Male:!!50 to 54 years
                    "B01001_017E",  # Estimate!!Total:!!Male:!!55 to 59 years
                    "B01001_018E",  # Estimate!!Total:!!Male:!!60 and 61 years
                    "B01001_019E",  # Estimate!!Total:!!Male:!!62 to 64 years
                    "B01001_029E",  # Estimate!!Total:!!Female:!!10 to 14 years
                    "B01001_030E",  # Estimate!!Total:!!Female:!!15 to 17 years
                    "B01001_031E",  # Estimate!!Total:!!Female:!!18 and 19 years
                    "B01001_032E",  # Estimate!!Total:!!Female:!!20 years
|
||||
"B01001_033E", # Estimate!!Total:!!Female:!!21 years
|
||||
"B01001_034E", # Estimate!!Total:!!Female:!!22 to 24 years
|
||||
"B01001_035E", # Estimate!!Total:!!Female:!!25 to 29 years
|
||||
"B01001_036E", # Estimate!!Total:!!Female:!!30 to 34 years
|
||||
"B01001_037E", # Estimate!!Total:!!Female:!!35 to 39 years
|
||||
"B01001_038E", # Estimate!!Total:!!Female:!!40 to 44 years
|
||||
"B01001_039E", # Estimate!!Total:!!Female:!!45 to 49 years
|
||||
"B01001_040E", # Estimate!!Total:!!Female:!!50 to 54 years
|
||||
"B01001_041E", # Estimate!!Total:!!Female:!!55 to 59 years
|
||||
"B01001_042E", # Estimate!!Total:!!Female:!!60 and 61 years
|
||||
"B01001_043E", # Estimate!!Total:!!Female:!!62 to 64 years
|
||||
],
|
||||
),
|
||||
(
|
||||
field_names.PERCENT_AGE_OVER_64,
|
||||
[
|
||||
"B01001_020E", # Estimate!!Total:!!Male:!!65 and 66 years
|
||||
"B01001_021E", # Estimate!!Total:!!Male:!!67 to 69 years
|
||||
"B01001_022E", # Estimate!!Total:!!Male:!!70 to 74 years
|
||||
"B01001_023E", # Estimate!!Total:!!Male:!!75 to 79 years
|
||||
"B01001_024E", # Estimate!!Total:!!Male:!!80 to 84 years
|
||||
"B01001_025E", # Estimate!!Total:!!Male:!!85 years and over
|
||||
"B01001_044E", # Estimate!!Total:!!Female:!!65 and 66 years
|
||||
"B01001_045E", # Estimate!!Total:!!Female:!!67 to 69 years
|
||||
"B01001_046E", # Estimate!!Total:!!Female:!!70 to 74 years
|
||||
"B01001_047E", # Estimate!!Total:!!Female:!!75 to 79 years
|
||||
"B01001_048E", # Estimate!!Total:!!Female:!!80 to 84 years
|
||||
"B01001_049E", # Estimate!!Total:!!Female:!!85 years and over
|
||||
],
|
||||
),
|
||||
]
|
||||
|
||||
# For each age bucket, sum the relevant columns and calculate the total
|
||||
# percentage.
|
||||
for age_bucket, sum_columns in age_bucket_and_its_sum_columns:
|
||||
df[age_bucket] = (
|
||||
df[sum_columns].sum(axis=1) / df[field_names.TOTAL_POP_FIELD]
|
||||
)
|
||||
|
||||
# Calculate college attendance and adjust low income
|
||||
df[self.COLLEGE_ATTENDANCE_FIELD] = (
|
||||
df[self.COLLEGE_ATTENDANCE_MALE_ENROLLED_PUBLIC]
|
||||
+ df[self.COLLEGE_ATTENDANCE_MALE_ENROLLED_PRIVATE]
|
||||
|
@ -361,26 +589,75 @@ class CensusACSETL(ExtractTransformLoad):
|
|||
1 - df[self.COLLEGE_ATTENDANCE_FIELD]
|
||||
)
|
||||
|
||||
# strip columns
|
||||
df = df[self.COLUMNS_TO_KEEP]
|
||||
|
||||
# Save results to self.
|
||||
self.df = df
|
||||
|
||||
# rename columns to be used in score
|
||||
rename_fields = {
|
||||
"Percent of individuals < 200% Federal Poverty Line": field_names.POVERTY_LESS_THAN_200_FPL_FIELD,
|
||||
}
|
||||
self.df.rename(
|
||||
columns=rename_fields,
|
||||
inplace=True,
|
||||
errors="raise",
|
||||
# we impute income for both income measures
|
||||
## TODO: Convert to pydantic for clarity
|
||||
logger.info("Imputing income information")
|
||||
ImputeVariables = namedtuple(
|
||||
"ImputeVariables", ["raw_field_name", "imputed_field_name"]
|
||||
)
|
||||
|
||||
def load(self) -> None:
|
||||
logger.info("Saving Census ACS Data")
|
||||
df = calculate_income_measures(
|
||||
impute_var_named_tup_list=[
|
||||
ImputeVariables(
|
||||
raw_field_name=self.POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME,
|
||||
imputed_field_name=self.IMPUTED_POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME,
|
||||
),
|
||||
ImputeVariables(
|
||||
raw_field_name=self.COLLEGE_ATTENDANCE_FIELD,
|
||||
imputed_field_name=self.IMPUTED_COLLEGE_ATTENDANCE_FIELD,
|
||||
),
|
||||
],
|
||||
geo_df=df,
|
||||
geoid_field=self.GEOID_TRACT_FIELD_NAME,
|
||||
minimum_population_required_for_imputation=self.MINIMUM_POPULATION_REQUIRED_FOR_IMPUTATION,
|
||||
)
|
||||
|
||||
# mkdir census
|
||||
self.OUTPUT_PATH.mkdir(parents=True, exist_ok=True)
|
||||
logger.info("Calculating with imputed values")
|
||||
|
||||
self.df.to_csv(path_or_buf=self.OUTPUT_PATH / "usa.csv", index=False)
|
||||
df[
|
||||
self.ADJUSTED_AND_IMPUTED_POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME
|
||||
] = (
|
||||
df[self.POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME].fillna(
|
||||
df[self.IMPUTED_POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME]
|
||||
)
|
||||
- df[self.COLLEGE_ATTENDANCE_FIELD].fillna(
|
||||
df[self.IMPUTED_COLLEGE_ATTENDANCE_FIELD]
|
||||
)
|
||||
# Use clip to ensure that the values are not negative if college attendance
|
||||
# is very high
|
||||
).clip(
|
||||
lower=0
|
||||
)
|
||||
|
||||
# All values should have a value at this point
|
||||
assert (
|
||||
# For tracts with >0 population
|
||||
df[
|
||||
df[field_names.TOTAL_POP_FIELD]
|
||||
>= self.MINIMUM_POPULATION_REQUIRED_FOR_IMPUTATION
|
||||
][
|
||||
# Then the imputed field should have no nulls
|
||||
self.ADJUSTED_AND_IMPUTED_POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME
|
||||
]
|
||||
.isna()
|
||||
.sum()
|
||||
== 0
|
||||
), "Error: not all values were filled..."
|
||||
|
||||
logger.info("Renaming columns...")
|
||||
df = df.rename(
|
||||
columns={
|
||||
self.ADJUSTED_AND_IMPUTED_POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME: field_names.POVERTY_LESS_THAN_200_FPL_IMPUTED_FIELD,
|
||||
self.POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME: field_names.POVERTY_LESS_THAN_200_FPL_FIELD,
|
||||
}
|
||||
)
|
||||
|
||||
# We generate a boolean that is TRUE when there is an imputed income but not a baseline income, and FALSE otherwise.
|
||||
# This allows us to see which tracts have an imputed income.
|
||||
df[field_names.IMPUTED_INCOME_FLAG_FIELD_NAME] = (
|
||||
df[field_names.POVERTY_LESS_THAN_200_FPL_IMPUTED_FIELD].notna()
|
||||
& df[field_names.POVERTY_LESS_THAN_200_FPL_FIELD].isna()
|
||||
)
|
||||
|
||||
# Save results to self.
|
||||
self.output_df = df
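As a quick illustration of the adjustment performed above (all numbers invented for the example, not taken from the data): the raw share of individuals below 200% FPL is preferred, the imputed share fills any gap, college attendance is subtracted, and the clip keeps the result non-negative.

```python
# Illustrative numbers only -- a minimal sketch of the fillna / subtract / clip
# logic above, not pipeline code.
poverty_raw, poverty_imputed = None, 0.40   # raw share is missing for this tract
college_raw, college_imputed = 0.55, None   # college attendance was observed

poverty = poverty_raw if poverty_raw is not None else poverty_imputed  # 0.40
college = college_raw if college_raw is not None else college_imputed  # 0.55
adjusted = max(poverty - college, 0)  # clip(lower=0) -> 0.0 instead of -0.15
```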
|
||||
|
|
|
@ -0,0 +1,166 @@
|
|||
from typing import Any
|
||||
from typing import List
|
||||
from typing import NamedTuple
|
||||
from typing import Tuple
|
||||
|
||||
import geopandas as gpd
|
||||
import pandas as pd
|
||||
from data_pipeline.score import field_names
|
||||
from data_pipeline.utils import get_module_logger
|
||||
|
||||
# pylint: disable=unsubscriptable-object
|
||||
|
||||
logger = get_module_logger(__name__)
|
||||
|
||||
|
||||
def _get_fips_mask(
|
||||
geo_df: gpd.GeoDataFrame,
|
||||
row: gpd.GeoSeries,
|
||||
fips_digits: int,
|
||||
geoid_field: str = "GEOID10_TRACT",
|
||||
) -> pd.Series:
|
||||
return (
|
||||
geo_df[geoid_field].str[:fips_digits] == row[geoid_field][:fips_digits]
|
||||
)
|
||||
|
||||
|
||||
def _get_neighbor_mask(
|
||||
geo_df: gpd.GeoDataFrame, row: gpd.GeoSeries
|
||||
) -> pd.Series:
|
||||
"""Returns neighboring tracts."""
|
||||
return geo_df["geometry"].touches(row["geometry"])
|
||||
|
||||
|
||||
def _choose_best_mask(
|
||||
geo_df: gpd.GeoDataFrame,
|
||||
masks_in_priority_order: List[pd.Series],
|
||||
column_to_impute: str,
|
||||
) -> pd.Series:
|
||||
for mask in masks_in_priority_order:
|
||||
if any(geo_df[mask][column_to_impute].notna()):
|
||||
return mask
|
||||
raise Exception("No mask found")
|
||||
|
||||
|
||||
def _prepare_dataframe_for_imputation(
|
||||
impute_var_named_tup_list: List[NamedTuple],
|
||||
geo_df: gpd.GeoDataFrame,
|
||||
population_field: str,
|
||||
minimum_population_required_for_imputation: int = 1,
|
||||
geoid_field: str = "GEOID10_TRACT",
|
||||
) -> Tuple[Any, gpd.GeoDataFrame]:
|
||||
"""Helper for imputation.
|
||||
|
||||
Given the inputs of `ImputeVariables`, returns list of tracts that need to be
|
||||
imputed, along with a GeoDataFrame that has a column with the imputed field
|
||||
"primed", meaning it is a copy of the raw field.
|
||||
|
||||
Will drop any rows with population less than
|
||||
`minimum_population_required_for_imputation`.
|
||||
"""
|
||||
imputing_cols = [
|
||||
impute_var_pair.raw_field_name
|
||||
for impute_var_pair in impute_var_named_tup_list
|
||||
]
|
||||
|
||||
# Prime column to exist
|
||||
for impute_var_pair in impute_var_named_tup_list:
|
||||
geo_df[impute_var_pair.imputed_field_name] = geo_df[
|
||||
impute_var_pair.raw_field_name
|
||||
].copy()
|
||||
|
||||
# Generate a list of tracts for which at least one of the imputation
|
||||
# columns is null and that also meets the population criteria.
|
||||
tract_list = geo_df[
|
||||
(
|
||||
# First, check whether any of the columns we want to impute contain null
|
||||
# values
|
||||
geo_df[imputing_cols].isna().any(axis=1)
|
||||
# Second, ensure population is not null and >= the minimum population
|
||||
& (
|
||||
geo_df[population_field].notnull()
|
||||
& (
|
||||
geo_df[population_field]
|
||||
>= minimum_population_required_for_imputation
|
||||
)
|
||||
)
|
||||
)
|
||||
][geoid_field].unique()
|
||||
|
||||
# Check that imputation is a valid choice for this set of fields
|
||||
logger.info(f"Imputing values for {len(tract_list)} unique tracts.")
|
||||
assert len(tract_list) > 0, "Error: No missing values to impute"
|
||||
|
||||
return tract_list, geo_df
|
||||
|
||||
|
||||
def calculate_income_measures(
|
||||
impute_var_named_tup_list: list,
|
||||
geo_df: gpd.GeoDataFrame,
|
||||
geoid_field: str,
|
||||
population_field: str = field_names.TOTAL_POP_FIELD,
|
||||
minimum_population_required_for_imputation: int = 1,
|
||||
) -> pd.DataFrame:
|
||||
"""Impute values based on geographic neighbors
|
||||
|
||||
We only want to check neighbors a single time, so all variables
|
||||
that we impute get imputed here.
|
||||
|
||||
Takes in:
|
||||
required:
|
||||
impute_var_named_tup_list: list of named tuples (imputed field, raw field)
|
||||
geo_df: geo dataframe that already has the census shapefiles merged
|
||||
geoid_field: tract-level GEOID
|
||||
|
||||
Returns: non-geometry pd.DataFrame
|
||||
"""
|
||||
# Determine which tracts need imputation and prime the imputed columns
|
||||
tract_list, geo_df = _prepare_dataframe_for_imputation(
|
||||
impute_var_named_tup_list=impute_var_named_tup_list,
|
||||
geo_df=geo_df,
|
||||
geoid_field=geoid_field,
|
||||
population_field=population_field,
|
||||
minimum_population_required_for_imputation=minimum_population_required_for_imputation,
|
||||
)
|
||||
|
||||
# Iterate through the dataframe to impute in place
|
||||
## TODO: We should probably convert this to a spatial join now that we are doing >1 imputation and it's taking a lot
|
||||
## of time, but thinking through how to do this while maintaining the masking will take some time. I think the best
|
||||
## way would be to (1) spatial join to all neighbors, and then (2) iterate to take the "smallest" set of neighbors...
|
||||
## but haven't implemented it yet.
|
||||
for index, row in geo_df.iterrows():
|
||||
if row[geoid_field] in tract_list:
|
||||
neighbor_mask = _get_neighbor_mask(geo_df, row)
|
||||
county_mask = _get_fips_mask(
|
||||
geo_df=geo_df, row=row, fips_digits=5, geoid_field=geoid_field
|
||||
)
|
||||
## TODO: Did CEQ decide to cut this?
|
||||
state_mask = _get_fips_mask(
|
||||
geo_df=geo_df, row=row, fips_digits=2, geoid_field=geoid_field
|
||||
)
|
||||
|
||||
# Impute fields for every row missing at least one value using the best possible set of neighbors
|
||||
# Note that later, we will pull raw.fillna(imputed), so the mechanics of this step aren't critical
|
||||
for impute_var_pair in impute_var_named_tup_list:
|
||||
mask_to_use = _choose_best_mask(
|
||||
geo_df=geo_df,
|
||||
masks_in_priority_order=[
|
||||
neighbor_mask,
|
||||
county_mask,
|
||||
state_mask,
|
||||
],
|
||||
column_to_impute=impute_var_pair.raw_field_name,
|
||||
)
|
||||
|
||||
geo_df.loc[index, impute_var_pair.imputed_field_name] = geo_df[
|
||||
mask_to_use
|
||||
][impute_var_pair.raw_field_name].mean()
|
||||
|
||||
logger.info("Casting geodataframe as a typical dataframe")
|
||||
# get rid of the geometry column and cast as a typical df
|
||||
df = pd.DataFrame(
|
||||
geo_df[[col for col in geo_df.columns if col != "geometry"]]
|
||||
)
|
||||
|
||||
# finally, return the df
|
||||
return df
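The TODO above sketches replacing the per-row `touches` loop with a spatial join. A minimal sketch of that idea follows; it assumes geopandas >= 0.10 (for the `predicate` keyword), covers only the neighbor mask rather than the county/state fallbacks, and the helper name is hypothetical, so it is not a drop-in replacement.

```python
import geopandas as gpd
import pandas as pd


def neighbor_means_via_sjoin(
    geo_df: gpd.GeoDataFrame,
    column_to_impute: str,
    geoid_field: str = "GEOID10_TRACT",
) -> pd.Series:
    # One spatial join finds every pair of touching tracts, replacing the
    # row-by-row calls to `geometry.touches` inside the Python loop.
    pairs = gpd.sjoin(
        geo_df[[geoid_field, "geometry"]],
        geo_df[[geoid_field, column_to_impute, "geometry"]],
        how="left",
        predicate="touches",
    )
    # Average each tract's neighbors in a single groupby; tracts whose
    # neighbors are all null would still need the county/state fallbacks.
    neighbor_means = pairs.groupby(f"{geoid_field}_left")[column_to_impute].mean()
    return geo_df[geoid_field].map(neighbor_means)
```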
|
|
@ -1,9 +1,9 @@
|
|||
import os
|
||||
from pathlib import Path
|
||||
from typing import List
|
||||
|
||||
import censusdata
|
||||
import pandas as pd
|
||||
|
||||
from data_pipeline.etl.sources.census.etl_utils import get_state_fips_codes
|
||||
from data_pipeline.utils import get_module_logger
|
||||
|
||||
|
|
|
@ -1,11 +1,10 @@
|
|||
import pandas as pd
|
||||
|
||||
from data_pipeline.etl.base import ExtractTransformLoad
|
||||
from data_pipeline.etl.sources.census_acs.etl_utils import (
|
||||
retrieve_census_acs_data,
|
||||
)
|
||||
from data_pipeline.utils import get_module_logger
|
||||
from data_pipeline.score import field_names
|
||||
from data_pipeline.utils import get_module_logger
|
||||
|
||||
logger = get_module_logger(__name__)
|
||||
|
||||
|
|
|
@ -1,13 +1,14 @@
|
|||
import json
|
||||
from pathlib import Path
|
||||
|
||||
import numpy as np
|
||||
import pandas as pd
|
||||
import requests
|
||||
|
||||
from data_pipeline.etl.base import ExtractTransformLoad
|
||||
from data_pipeline.utils import get_module_logger
|
||||
from data_pipeline.config import settings
|
||||
from data_pipeline.utils import unzip_file_from_url, download_file_from_url
|
||||
from data_pipeline.etl.base import ExtractTransformLoad
|
||||
from data_pipeline.utils import download_file_from_url
|
||||
from data_pipeline.utils import get_module_logger
|
||||
from data_pipeline.utils import unzip_file_from_url
|
||||
|
||||
logger = get_module_logger(__name__)
|
||||
|
||||
|
@ -282,12 +283,20 @@ class CensusACSMedianIncomeETL(ExtractTransformLoad):
|
|||
|
||||
# Download MSA median incomes
|
||||
logger.info("Starting download of MSA median incomes.")
|
||||
download = requests.get(self.MSA_MEDIAN_INCOME_URL, verify=None)
|
||||
download = requests.get(
|
||||
self.MSA_MEDIAN_INCOME_URL,
|
||||
verify=None,
|
||||
timeout=settings.REQUESTS_DEFAULT_TIMOUT,
|
||||
)
|
||||
self.msa_median_incomes = json.loads(download.content)
|
||||
|
||||
# Download state median incomes
|
||||
logger.info("Starting download of state median incomes.")
|
||||
download_state = requests.get(self.STATE_MEDIAN_INCOME_URL, verify=None)
|
||||
download_state = requests.get(
|
||||
self.STATE_MEDIAN_INCOME_URL,
|
||||
verify=None,
|
||||
timeout=settings.REQUESTS_DEFAULT_TIMOUT,
|
||||
)
|
||||
self.state_median_incomes = json.loads(download_state.content)
|
||||
## NOTE we already have PR's MI here
|
||||
|
||||
|
|
|
@ -1,12 +1,13 @@
|
|||
import json
|
||||
import requests
|
||||
from typing import List
|
||||
|
||||
import numpy as np
|
||||
import pandas as pd
|
||||
|
||||
import requests
|
||||
from data_pipeline.config import settings
|
||||
from data_pipeline.etl.base import ExtractTransformLoad
|
||||
from data_pipeline.utils import get_module_logger
|
||||
from data_pipeline.score import field_names
|
||||
from data_pipeline.utils import get_module_logger
|
||||
|
||||
pd.options.mode.chained_assignment = "raise"
|
||||
|
||||
|
@ -146,6 +147,63 @@ class CensusDecennialETL(ExtractTransformLoad):
|
|||
field_names.CENSUS_DECENNIAL_UNEMPLOYMENT_FIELD_2009
|
||||
)
|
||||
|
||||
# Race/Ethnicity fields
|
||||
self.TOTAL_RACE_POPULATION_FIELD = "PCT086001" # Total
|
||||
self.ASIAN_FIELD = "PCT086002" # Total!!Asian
|
||||
self.BLACK_FIELD = "PCT086003" # Total!!Black or African American
|
||||
self.HAWAIIAN_FIELD = (
|
||||
"PCT086004" # Total!!Native Hawaiian and Other Pacific Islander
|
||||
)
|
||||
# Note that the 2010 census for island areas does not break out
|
||||
# hispanic and non-hispanic white, so this is slightly different from
|
||||
# our other demographic data
|
||||
self.NON_HISPANIC_WHITE_FIELD = "PCT086005" # Total!!White
|
||||
self.HISPANIC_FIELD = "PCT086006" # Total!!Hispanic or Latino
|
||||
self.OTHER_RACE_FIELD = "PCT086007" # Total!!Other Ethnic Origin or Race
|
||||
|
||||
self.TOTAL_RACE_POPULATION_VI_FIELD = "P003001" # Total
|
||||
self.BLACK_VI_FIELD = (
|
||||
"P003003" # Total!!One race!!Black or African American alone
|
||||
)
|
||||
self.AMERICAN_INDIAN_VI_FIELD = "P003005" # Total!!One race!!American Indian and Alaska Native alone
|
||||
self.ASIAN_VI_FIELD = "P003006" # Total!!One race!!Asian alone
|
||||
self.HAWAIIAN_VI_FIELD = "P003007" # Total!!One race!!Native Hawaiian and Other Pacific Islander alone
|
||||
self.TWO_OR_MORE_RACES_VI_FIELD = "P003009" # Total!!Two or More Races
|
||||
self.NON_HISPANIC_WHITE_VI_FIELD = (
|
||||
"P005006" # Total!!Not Hispanic or Latino!!One race!!White alone
|
||||
)
|
||||
self.HISPANIC_VI_FIELD = "P005002" # Total!!Hispanic or Latino
|
||||
self.OTHER_RACE_VI_FIELD = (
|
||||
"P003008" # Total!!One race!!Some Other Race alone
|
||||
)
|
||||
self.TOTAL_RACE_POPULATION_VI_FIELD = "P003001" # Total
|
||||
|
||||
self.TOTAL_RACE_POPULATION_FIELD_NAME = (
|
||||
"Total population surveyed on racial data"
|
||||
)
|
||||
self.BLACK_FIELD_NAME = "Black or African American"
|
||||
self.AMERICAN_INDIAN_FIELD_NAME = "American Indian / Alaska Native"
|
||||
self.ASIAN_FIELD_NAME = "Asian"
|
||||
self.HAWAIIAN_FIELD_NAME = "Native Hawaiian or Pacific"
|
||||
self.TWO_OR_MORE_RACES_FIELD_NAME = "two or more races"
|
||||
self.NON_HISPANIC_WHITE_FIELD_NAME = "White"
|
||||
self.HISPANIC_FIELD_NAME = "Hispanic or Latino"
|
||||
# Note that `other` is lowercase because the whole field will show up in the download
|
||||
# file as "Percent other races"
|
||||
self.OTHER_RACE_FIELD_NAME = "other races"
|
||||
|
||||
# Name output demographics fields.
|
||||
self.RE_OUTPUT_FIELDS = [
|
||||
self.BLACK_FIELD_NAME,
|
||||
self.AMERICAN_INDIAN_FIELD_NAME,
|
||||
self.ASIAN_FIELD_NAME,
|
||||
self.HAWAIIAN_FIELD_NAME,
|
||||
self.TWO_OR_MORE_RACES_FIELD_NAME,
|
||||
self.NON_HISPANIC_WHITE_FIELD_NAME,
|
||||
self.HISPANIC_FIELD_NAME,
|
||||
self.OTHER_RACE_FIELD_NAME,
|
||||
]
|
||||
|
||||
var_list = [
|
||||
self.MEDIAN_INCOME_FIELD,
|
||||
self.TOTAL_HOUSEHOLD_RATIO_INCOME_TO_POVERTY_LEVEL_FIELD,
|
||||
|
@ -161,6 +219,13 @@ class CensusDecennialETL(ExtractTransformLoad):
|
|||
self.EMPLOYMENT_FEMALE_IN_LABOR_FORCE_FIELD,
|
||||
self.EMPLOYMENT_FEMALE_UNEMPLOYED_FIELD,
|
||||
self.TOTAL_POP_FIELD,
|
||||
self.TOTAL_RACE_POPULATION_FIELD,
|
||||
self.ASIAN_FIELD,
|
||||
self.BLACK_FIELD,
|
||||
self.HAWAIIAN_FIELD,
|
||||
self.NON_HISPANIC_WHITE_FIELD,
|
||||
self.HISPANIC_FIELD,
|
||||
self.OTHER_RACE_FIELD,
|
||||
]
|
||||
var_list = ",".join(var_list)
|
||||
|
||||
|
@ -179,6 +244,15 @@ class CensusDecennialETL(ExtractTransformLoad):
|
|||
self.EMPLOYMENT_FEMALE_IN_LABOR_FORCE_VI_FIELD,
|
||||
self.EMPLOYMENT_FEMALE_UNEMPLOYED_VI_FIELD,
|
||||
self.TOTAL_POP_VI_FIELD,
|
||||
self.BLACK_VI_FIELD,
|
||||
self.AMERICAN_INDIAN_VI_FIELD,
|
||||
self.ASIAN_VI_FIELD,
|
||||
self.HAWAIIAN_VI_FIELD,
|
||||
self.TWO_OR_MORE_RACES_VI_FIELD,
|
||||
self.NON_HISPANIC_WHITE_VI_FIELD,
|
||||
self.HISPANIC_VI_FIELD,
|
||||
self.OTHER_RACE_VI_FIELD,
|
||||
self.TOTAL_RACE_POPULATION_VI_FIELD,
|
||||
]
|
||||
var_list_vi = ",".join(var_list_vi)
|
||||
|
||||
|
@ -209,6 +283,23 @@ class CensusDecennialETL(ExtractTransformLoad):
|
|||
self.EMPLOYMENT_MALE_UNEMPLOYED_FIELD: self.EMPLOYMENT_MALE_UNEMPLOYED_FIELD,
|
||||
self.EMPLOYMENT_FEMALE_IN_LABOR_FORCE_FIELD: self.EMPLOYMENT_FEMALE_IN_LABOR_FORCE_FIELD,
|
||||
self.EMPLOYMENT_FEMALE_UNEMPLOYED_FIELD: self.EMPLOYMENT_FEMALE_UNEMPLOYED_FIELD,
|
||||
self.TOTAL_RACE_POPULATION_FIELD: self.TOTAL_RACE_POPULATION_FIELD_NAME,
|
||||
self.TOTAL_RACE_POPULATION_VI_FIELD: self.TOTAL_RACE_POPULATION_FIELD_NAME,
|
||||
# Note there is no American Indian data for AS/GU/MI
|
||||
self.AMERICAN_INDIAN_VI_FIELD: self.AMERICAN_INDIAN_FIELD_NAME,
|
||||
self.ASIAN_FIELD: self.ASIAN_FIELD_NAME,
|
||||
self.ASIAN_VI_FIELD: self.ASIAN_FIELD_NAME,
|
||||
self.BLACK_FIELD: self.BLACK_FIELD_NAME,
|
||||
self.BLACK_VI_FIELD: self.BLACK_FIELD_NAME,
|
||||
self.HAWAIIAN_FIELD: self.HAWAIIAN_FIELD_NAME,
|
||||
self.HAWAIIAN_VI_FIELD: self.HAWAIIAN_FIELD_NAME,
|
||||
self.TWO_OR_MORE_RACES_VI_FIELD: self.TWO_OR_MORE_RACES_FIELD_NAME,
|
||||
self.NON_HISPANIC_WHITE_FIELD: self.NON_HISPANIC_WHITE_FIELD_NAME,
|
||||
self.NON_HISPANIC_WHITE_VI_FIELD: self.NON_HISPANIC_WHITE_FIELD_NAME,
|
||||
self.HISPANIC_FIELD: self.HISPANIC_FIELD_NAME,
|
||||
self.HISPANIC_VI_FIELD: self.HISPANIC_FIELD_NAME,
|
||||
self.OTHER_RACE_FIELD: self.OTHER_RACE_FIELD_NAME,
|
||||
self.OTHER_RACE_VI_FIELD: self.OTHER_RACE_FIELD_NAME,
|
||||
}
|
||||
|
||||
# To do: Ask Census Slack Group about whether you need to hardcode the county fips
|
||||
|
@ -251,6 +342,8 @@ class CensusDecennialETL(ExtractTransformLoad):
|
|||
+ "&for=tract:*&in=state:{}%20county:{}"
|
||||
)
|
||||
|
||||
self.final_race_fields: List[str] = []
|
||||
|
||||
self.df: pd.DataFrame
|
||||
self.df_vi: pd.DataFrame
|
||||
self.df_all: pd.DataFrame
|
||||
|
@ -263,14 +356,17 @@ class CensusDecennialETL(ExtractTransformLoad):
|
|||
f"Downloading data for state/territory {island['state_abbreviation']}"
|
||||
)
|
||||
for county in island["county_fips"]:
|
||||
api_url = self.API_URL.format(
|
||||
self.DECENNIAL_YEAR,
|
||||
island["state_abbreviation"],
|
||||
island["var_list"],
|
||||
island["fips"],
|
||||
county,
|
||||
)
|
||||
logger.debug(f"CENSUS: Requesting {api_url}")
|
||||
download = requests.get(
|
||||
self.API_URL.format(
|
||||
self.DECENNIAL_YEAR,
|
||||
island["state_abbreviation"],
|
||||
island["var_list"],
|
||||
island["fips"],
|
||||
county,
|
||||
)
|
||||
api_url,
|
||||
timeout=settings.REQUESTS_DEFAULT_TIMOUT,
|
||||
)
|
||||
|
||||
df = json.loads(download.content)
|
||||
|
@ -377,6 +473,19 @@ class CensusDecennialETL(ExtractTransformLoad):
|
|||
self.df_all["state"] + self.df_all["county"] + self.df_all["tract"]
|
||||
)
|
||||
|
||||
# Calculate stats by race
|
||||
for race_field_name in self.RE_OUTPUT_FIELDS:
|
||||
output_field_name = (
|
||||
field_names.PERCENT_PREFIX
|
||||
+ race_field_name
|
||||
+ field_names.ISLAND_AREA_BACKFILL_SUFFIX
|
||||
)
|
||||
self.final_race_fields.append(output_field_name)
|
||||
self.df_all[output_field_name] = (
|
||||
self.df_all[race_field_name]
|
||||
/ self.df_all[self.TOTAL_RACE_POPULATION_FIELD_NAME]
|
||||
)
|
||||
|
||||
# Reporting Missing Values
|
||||
for col in self.df_all.columns:
|
||||
missing_value_count = self.df_all[col].isnull().sum()
|
||||
|
@ -400,7 +509,7 @@ class CensusDecennialETL(ExtractTransformLoad):
|
|||
self.PERCENTAGE_HOUSEHOLDS_BELOW_200_PERC_POVERTY_LEVEL_FIELD_NAME,
|
||||
self.PERCENTAGE_HIGH_SCHOOL_ED_FIELD_NAME,
|
||||
self.UNEMPLOYMENT_FIELD_NAME,
|
||||
]
|
||||
] + self.final_race_fields
|
||||
|
||||
self.df_all[columns_to_include].to_csv(
|
||||
path_or_buf=self.OUTPUT_PATH / "usa.csv", index=False
|
||||
|
|
|
@ -1,9 +1,10 @@
|
|||
from pathlib import Path
|
||||
import pandas as pd
|
||||
|
||||
import pandas as pd
|
||||
from data_pipeline.etl.base import ExtractTransformLoad
|
||||
from data_pipeline.score import field_names
|
||||
from data_pipeline.utils import get_module_logger, unzip_file_from_url
|
||||
from data_pipeline.etl.base import ValidGeoLevel
|
||||
from data_pipeline.utils import get_module_logger
|
||||
from data_pipeline.config import settings
|
||||
|
||||
logger = get_module_logger(__name__)
|
||||
|
||||
|
@ -21,14 +22,35 @@ class ChildOpportunityIndex(ExtractTransformLoad):
|
|||
Full technical documents: https://www.diversitydatakids.org/sites/default/files/2020-02/ddk_coi2.0_technical_documentation_20200212.pdf.
|
||||
|
||||
Github repo: https://github.com/diversitydatakids/COI/
|
||||
|
||||
"""
|
||||
|
||||
# Metadata for the baseclass
|
||||
NAME = "child_opportunity_index"
|
||||
GEO_LEVEL = ValidGeoLevel.CENSUS_TRACT
|
||||
LOAD_YAML_CONFIG: bool = True
|
||||
|
||||
# Define these for easy code completion
|
||||
EXTREME_HEAT_FIELD: str
|
||||
HEALTHY_FOOD_FIELD: str
|
||||
IMPENETRABLE_SURFACES_FIELD: str
|
||||
READING_FIELD: str
|
||||
|
||||
PUERTO_RICO_EXPECTED_IN_DATA = False
|
||||
|
||||
def __init__(self):
|
||||
self.COI_FILE_URL = (
|
||||
"https://data.diversitydatakids.org/datastore/zip/f16fff12-b1e5-4f60-85d3-"
|
||||
"3a0ededa30a0?format=csv"
|
||||
)
|
||||
if settings.DATASOURCE_RETRIEVAL_FROM_AWS:
|
||||
self.SOURCE_URL = (
|
||||
f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/"
|
||||
"child_opportunity_index/raw.zip"
|
||||
)
|
||||
else:
|
||||
self.SOURCE_URL = (
|
||||
"https://data.diversitydatakids.org/datastore/zip/f16fff12-b1e5-4f60-85d3-"
|
||||
"3a0ededa30a0?format=csv"
|
||||
)
|
||||
|
||||
# TODO: Decide about nixing this
|
||||
self.TRACT_INPUT_COLUMN_NAME = self.INPUT_GEOID_TRACT_FIELD_NAME
|
||||
|
||||
self.OUTPUT_PATH: Path = (
|
||||
self.DATA_PATH / "dataset" / "child_opportunity_index"
|
||||
|
@ -40,31 +62,19 @@ class ChildOpportunityIndex(ExtractTransformLoad):
|
|||
self.IMPENETRABLE_SURFACES_INPUT_FIELD = "HE_GREEN"
|
||||
self.READING_INPUT_FIELD = "ED_READING"
|
||||
|
||||
# Constants for output
|
||||
self.COLUMNS_TO_KEEP = [
|
||||
self.GEOID_TRACT_FIELD_NAME,
|
||||
field_names.EXTREME_HEAT_FIELD,
|
||||
field_names.HEALTHY_FOOD_FIELD,
|
||||
field_names.IMPENETRABLE_SURFACES_FIELD,
|
||||
field_names.READING_FIELD,
|
||||
]
|
||||
|
||||
self.raw_df: pd.DataFrame
|
||||
self.output_df: pd.DataFrame
|
||||
|
||||
def extract(self) -> None:
|
||||
logger.info("Starting 51MB data download.")
|
||||
|
||||
unzip_file_from_url(
|
||||
file_url=self.COI_FILE_URL,
|
||||
download_path=self.get_tmp_path(),
|
||||
unzipped_file_path=self.get_tmp_path() / "child_opportunity_index",
|
||||
super().extract(
|
||||
source_url=self.SOURCE_URL,
|
||||
extract_path=self.get_tmp_path(),
|
||||
)
|
||||
|
||||
self.raw_df = pd.read_csv(
|
||||
filepath_or_buffer=self.get_tmp_path()
|
||||
/ "child_opportunity_index"
|
||||
/ "raw.csv",
|
||||
def transform(self) -> None:
|
||||
logger.info("Starting transforms.")
|
||||
raw_df = pd.read_csv(
|
||||
filepath_or_buffer=self.get_tmp_path() / "raw.csv",
|
||||
# The following need to remain as strings for all of their digits, not get
|
||||
# converted to numbers.
|
||||
dtype={
|
||||
|
@ -73,16 +83,13 @@ class ChildOpportunityIndex(ExtractTransformLoad):
|
|||
low_memory=False,
|
||||
)
|
||||
|
||||
def transform(self) -> None:
|
||||
logger.info("Starting transforms.")
|
||||
|
||||
output_df = self.raw_df.rename(
|
||||
output_df = raw_df.rename(
|
||||
columns={
|
||||
self.TRACT_INPUT_COLUMN_NAME: self.GEOID_TRACT_FIELD_NAME,
|
||||
self.EXTREME_HEAT_INPUT_FIELD: field_names.EXTREME_HEAT_FIELD,
|
||||
self.HEALTHY_FOOD_INPUT_FIELD: field_names.HEALTHY_FOOD_FIELD,
|
||||
self.IMPENETRABLE_SURFACES_INPUT_FIELD: field_names.IMPENETRABLE_SURFACES_FIELD,
|
||||
self.READING_INPUT_FIELD: field_names.READING_FIELD,
|
||||
self.EXTREME_HEAT_INPUT_FIELD: self.EXTREME_HEAT_FIELD,
|
||||
self.HEALTHY_FOOD_INPUT_FIELD: self.HEALTHY_FOOD_FIELD,
|
||||
self.IMPENETRABLE_SURFACES_INPUT_FIELD: self.IMPENETRABLE_SURFACES_FIELD,
|
||||
self.READING_INPUT_FIELD: self.READING_FIELD,
|
||||
}
|
||||
)
|
||||
|
||||
|
@ -95,8 +102,8 @@ class ChildOpportunityIndex(ExtractTransformLoad):
|
|||
|
||||
# Convert percents from 0-100 to 0-1 to standardize with our other fields.
|
||||
percent_fields_to_convert = [
|
||||
field_names.HEALTHY_FOOD_FIELD,
|
||||
field_names.IMPENETRABLE_SURFACES_FIELD,
|
||||
self.HEALTHY_FOOD_FIELD,
|
||||
self.IMPENETRABLE_SURFACES_FIELD,
|
||||
]
|
||||
|
||||
for percent_field_to_convert in percent_fields_to_convert:
|
||||
|
@ -105,11 +112,3 @@ class ChildOpportunityIndex(ExtractTransformLoad):
|
|||
)
|
||||
|
||||
self.output_df = output_df
|
||||
|
||||
def load(self) -> None:
|
||||
logger.info("Saving CSV")
|
||||
|
||||
self.OUTPUT_PATH.mkdir(parents=True, exist_ok=True)
|
||||
self.output_df[self.COLUMNS_TO_KEEP].to_csv(
|
||||
path_or_buf=self.OUTPUT_PATH / "usa.csv", index=False
|
||||
)
|
||||
|
|
|
@ -1,64 +1,51 @@
|
|||
from pathlib import Path
|
||||
import pandas as pd
|
||||
|
||||
import pandas as pd
|
||||
from data_pipeline.config import settings
|
||||
from data_pipeline.etl.base import ExtractTransformLoad
|
||||
from data_pipeline.utils import get_module_logger, unzip_file_from_url
|
||||
from data_pipeline.etl.base import ValidGeoLevel
|
||||
from data_pipeline.utils import get_module_logger
|
||||
|
||||
logger = get_module_logger(__name__)
|
||||
|
||||
|
||||
class DOEEnergyBurden(ExtractTransformLoad):
|
||||
def __init__(self):
|
||||
self.DOE_FILE_URL = (
|
||||
settings.AWS_JUSTICE40_DATASOURCES_URL
|
||||
+ "/DOE_LEAD_AMI_TRACT_2018_ALL.csv.zip"
|
||||
)
|
||||
NAME = "doe_energy_burden"
|
||||
SOURCE_URL: str = (
|
||||
settings.AWS_JUSTICE40_DATASOURCES_URL
|
||||
+ "/DOE_LEAD_AMI_TRACT_2018_ALL.csv.zip"
|
||||
)
|
||||
GEO_LEVEL = ValidGeoLevel.CENSUS_TRACT
|
||||
LOAD_YAML_CONFIG: bool = True
|
||||
|
||||
REVISED_ENERGY_BURDEN_FIELD_NAME: str
|
||||
|
||||
def __init__(self):
|
||||
self.OUTPUT_PATH: Path = (
|
||||
self.DATA_PATH / "dataset" / "doe_energy_burden"
|
||||
)
|
||||
|
||||
self.TRACT_INPUT_COLUMN_NAME = "FIP"
|
||||
self.INPUT_ENERGY_BURDEN_FIELD_NAME = "BURDEN"
|
||||
self.REVISED_ENERGY_BURDEN_FIELD_NAME = "Energy burden"
|
||||
|
||||
# Constants for output
|
||||
self.COLUMNS_TO_KEEP = [
|
||||
self.GEOID_TRACT_FIELD_NAME,
|
||||
self.REVISED_ENERGY_BURDEN_FIELD_NAME,
|
||||
]
|
||||
|
||||
self.raw_df: pd.DataFrame
|
||||
self.output_df: pd.DataFrame
|
||||
|
||||
def extract(self) -> None:
|
||||
logger.info("Starting data download.")
|
||||
|
||||
unzip_file_from_url(
|
||||
file_url=self.DOE_FILE_URL,
|
||||
download_path=self.get_tmp_path(),
|
||||
unzipped_file_path=self.get_tmp_path() / "doe_energy_burden",
|
||||
)
|
||||
|
||||
self.raw_df = pd.read_csv(
|
||||
def transform(self) -> None:
|
||||
logger.info("Starting DOE Energy Burden transforms.")
|
||||
raw_df: pd.DataFrame = pd.read_csv(
|
||||
filepath_or_buffer=self.get_tmp_path()
|
||||
/ "doe_energy_burden"
|
||||
/ "DOE_LEAD_AMI_TRACT_2018_ALL.csv",
|
||||
# The following need to remain as strings for all of their digits, not get converted to numbers.
|
||||
dtype={
|
||||
self.TRACT_INPUT_COLUMN_NAME: "string",
|
||||
self.INPUT_GEOID_TRACT_FIELD_NAME: "string",
|
||||
},
|
||||
low_memory=False,
|
||||
)
|
||||
|
||||
def transform(self) -> None:
|
||||
logger.info("Starting transforms.")
|
||||
|
||||
output_df = self.raw_df.rename(
|
||||
logger.info("Renaming columns and ensuring output format is correct")
|
||||
output_df = raw_df.rename(
|
||||
columns={
|
||||
self.INPUT_ENERGY_BURDEN_FIELD_NAME: self.REVISED_ENERGY_BURDEN_FIELD_NAME,
|
||||
self.TRACT_INPUT_COLUMN_NAME: self.GEOID_TRACT_FIELD_NAME,
|
||||
self.INPUT_GEOID_TRACT_FIELD_NAME: self.GEOID_TRACT_FIELD_NAME,
|
||||
}
|
||||
)
|
||||
|
||||
|
@ -71,11 +58,3 @@ class DOEEnergyBurden(ExtractTransformLoad):
|
|||
)
|
||||
|
||||
self.output_df = output_df
|
||||
|
||||
def load(self) -> None:
|
||||
logger.info("Saving DOE Energy Burden CSV")
|
||||
|
||||
self.OUTPUT_PATH.mkdir(parents=True, exist_ok=True)
|
||||
self.output_df[self.COLUMNS_TO_KEEP].to_csv(
|
||||
path_or_buf=self.OUTPUT_PATH / "usa.csv", index=False
|
||||
)
|
||||
|
|
|
@ -0,0 +1,16 @@
|
|||
# DOT travel barriers
|
||||
|
||||
The description below is taken directly from DOT:
|
||||
|
||||
Consistent with OMB’s Interim Guidance for the Justice40 Initiative, DOT’s interim definition of DACs includes (a) certain qualifying census tracts, (b) any Tribal land, or (c) any territory or possession of the United States. DOT has provided a mapping tool to assist applicants in identifying whether a project is located in a Disadvantaged Community, available at Transportation Disadvantaged Census Tracts (arcgis.com). A shapefile of the geospatial data is available at Transportation Disadvantaged Census Tracts shapefile (version 2.0, posted 5/10/22).
|
||||
|
||||
The DOT interim definition for DACs was developed by an internal and external collaborative research process (see recordings from November 2021 public meetings). It includes data for 22 indicators collected at the census tract level and grouped into six (6) categories of transportation disadvantage. The numbers in parentheses show how many indicators fall into each category:
|
||||
|
||||
- Transportation access disadvantage identifies communities and places that spend more, and take longer, to get where they need to go. (4)
|
||||
- Health disadvantage identifies communities based on variables associated with adverse health outcomes, disability, as well as environmental exposures. (3)
|
||||
- Environmental disadvantage identifies communities with disproportionately high levels of certain air pollutants and high potential presence of lead-based paint in housing units. (6)
|
||||
- Economic disadvantage identifies areas and populations with high poverty, low wealth, lack of local jobs, low homeownership, low educational attainment, and high inequality. (7)
|
||||
- Resilience disadvantage identifies communities vulnerable to hazards caused by climate change. (1)
|
||||
- Equity disadvantage identifies communities with a high percentile of persons (age 5+) who speak English "less than well." (1)
|
||||
|
||||
The CEJST uses only Transportation Access Disadvantage.
|
|
@ -0,0 +1,69 @@
|
|||
# pylint: disable=unsubscriptable-object
|
||||
# pylint: disable=unsupported-assignment-operation
|
||||
import geopandas as gpd
|
||||
import pandas as pd
|
||||
from data_pipeline.etl.base import ExtractTransformLoad
|
||||
from data_pipeline.etl.base import ValidGeoLevel
|
||||
from data_pipeline.utils import get_module_logger
|
||||
from data_pipeline.config import settings
|
||||
|
||||
logger = get_module_logger(__name__)
|
||||
|
||||
|
||||
class TravelCompositeETL(ExtractTransformLoad):
|
||||
"""ETL class for the DOT Travel Disadvantage Dataset"""
|
||||
|
||||
NAME = "travel_composite"
|
||||
|
||||
if settings.DATASOURCE_RETRIEVAL_FROM_AWS:
|
||||
SOURCE_URL = (
|
||||
f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/"
|
||||
"dot_travel_composite/Shapefile_and_Metadata.zip"
|
||||
)
|
||||
else:
|
||||
SOURCE_URL = "https://www.transportation.gov/sites/dot.gov/files/Shapefile_and_Metadata.zip"
|
||||
|
||||
GEO_LEVEL = ValidGeoLevel.CENSUS_TRACT
|
||||
PUERTO_RICO_EXPECTED_IN_DATA = False
|
||||
LOAD_YAML_CONFIG: bool = True
|
||||
|
||||
# Output score variables (values set on datasets.yml) for linting purposes
|
||||
TRAVEL_BURDEN_FIELD_NAME: str
|
||||
|
||||
def __init__(self):
|
||||
# define the full path for the input shapefile
|
||||
self.INPUT_SHP = (
|
||||
self.get_tmp_path() / "DOT_Disadvantage_Layer_Final_April2022.shp"
|
||||
)
|
||||
|
||||
# this is the main dataframe
|
||||
self.df: pd.DataFrame
|
||||
|
||||
# Start dataset-specific vars here
|
||||
## Average of Transportation Indicator Percentiles (calculated)
|
||||
## Calculated: Average of (EPL_TCB+EPL_NWKI+EPL_NOVEH+EPL_COMMUTE) excluding NULLS
|
||||
## See metadata for more information
|
||||
self.INPUT_TRAVEL_DISADVANTAGE_FIELD_NAME = "Transp_TH"
|
||||
self.INPUT_GEOID_TRACT_FIELD_NAME = "FIPS"
|
||||
|
||||
def transform(self) -> None:
|
||||
"""Reads the unzipped data file into memory and applies the following
|
||||
transformations to prepare it for the load() method:
|
||||
|
||||
- Renames the Census Tract column to match the other datasets
|
||||
- Converts to CSV
|
||||
"""
|
||||
logger.info("Transforming DOT Travel Disadvantage Data")
|
||||
|
||||
# read in the unzipped shapefile from data source
|
||||
# reformat it to be standard df, remove unassigned rows, and
|
||||
# then rename the Census Tract column for merging
|
||||
df_dot: pd.DataFrame = gpd.read_file(self.INPUT_SHP)
|
||||
df_dot = df_dot.rename(
|
||||
columns={
|
||||
self.INPUT_GEOID_TRACT_FIELD_NAME: self.GEOID_TRACT_FIELD_NAME,
|
||||
self.INPUT_TRAVEL_DISADVANTAGE_FIELD_NAME: self.TRAVEL_BURDEN_FIELD_NAME,
|
||||
}
|
||||
).dropna(subset=[self.GEOID_TRACT_FIELD_NAME])
|
||||
# Assign the final df to the class' output_df for the load method
|
||||
self.output_df = df_dot
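For reference, this ETL consumes DOT's precomputed `Transp_TH` column and does not recompute the average described in the comments above. If one wanted to reproduce that formula from the component percentiles, a minimal sketch could look like the following (the `EPL_*` column names come from the DOT metadata and are not used anywhere in this ETL):

```python
import pandas as pd


def average_transportation_percentiles(df: pd.DataFrame) -> pd.Series:
    # Average of the four transportation-indicator percentiles; pandas'
    # mean(axis=1) skips NaN values by default, matching "excluding NULLS".
    indicator_columns = ["EPL_TCB", "EPL_NWKI", "EPL_NOVEH", "EPL_COMMUTE"]
    return df[indicator_columns].mean(axis=1)
```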
|
|
@ -0,0 +1,40 @@
|
|||
The following is the description from eAMLIS as of August 16, 2022.
|
||||
---
|
||||
|
||||
e-AMLIS is not a comprehensive database of all AML features or all AML grant activities. e-AMLIS is a national inventory that provides information about known abandoned mine land (AML) features including polluted waters. The majority of the data in e-AMLIS provides information about known coal AML features for the 25 states and 3 tribal SMCRA-approved AML Programs. e-AMLIS also provides limited information on non-coal AML features, and, non-coal reclamation projects as well as AML features for states and tribes that do not have an approved AML Program. Additionally, e-AMLIS only accounts for the direct construction cost to reclaim each AML feature that has been identified by states and Tribes. Other project costs such as planning, design, permitting, and construction oversight are not tracked in e-AMLIS.
|
||||
|
||||
The figures in e-AMLIS are further broken down into 3 cost categories:
|
||||
|
||||
Unfunded Cost represents pre-construction estimates to reclaim the AML feature;
|
||||
Funded Cost indicates that construction has been approved by OSM and these figures may change during construction;
|
||||
Completed Cost is the actual cost to complete construction and reclamation of the AML feature.
|
||||
DOI/OSMRE’s Financial Business & Management System is the system of record to obtain comprehensive information about all AML grant expenditures.
|
||||
|
||||
An inventory of land and water impacted by past mining (primarily coal mining) is maintained by OSMRE to provide information needed to implement the Surface Mining Control and Reclamation Act of 1977 (SMCRA). The inventory contains information on the location, type, and extent of AML impacts, as well as, information on the cost associated with the reclamation of those problems. The inventory is based upon field surveys by State, Tribal, and OSMRE program officials. It is dynamic to the extent that it is modified as new problems are identified and existing problems are reclaimed.
|
||||
|
||||
The Abandoned Mine Land Reclamation Act (AMRA) of 1990, amended SMCRA. The amended law expanded the scope of data OSMRE must collect regarding AML reclamation programs and progress. On December 20, 2006, SMCRA was amended under the Tax Relief and Health Care Act of 2006 to add sources of program funding, emphasize high priority coal reclamation, and expand OSMRE’s responsibilities towards implementation and management of the AML Inventory.
|
||||
|
||||
WHO MAINTAINS THE INFORMATION IN THE AML INVENTORY?
|
||||
The information is developed and/or updated by the States and Indian Tribes managing their own AML programs under SMCRA or by the OSMRE office responsible for States and Indian Tribes not managing their own AML problems.
|
||||
|
||||
TYPES OF PROBLEMS
|
||||
"High Priority"
|
||||
The most serious AML problems are those posing a threat to health, safety and general welfare of people (Priority 1 and Priority 2, or "high priority"). These are the only problems which the law requires to be inventoried. There are 17 Priority 1 and 2 problem types.
|
||||
|
||||
Emergencies
|
||||
Under the 2006 amendments to SMCRA, AML grants to states and tribes increased from $145 million in FY 2007 to $395 million in FY 2011. The increase in funding allowed states to take responsibility for their AML emergencies as part of their regular AML programs.
|
||||
|
||||
Until FY 2011, OSMRE provided Abandoned Mine Land (AML) State Emergency grants to the 15 states that manage their own emergency programs under the Abandoned Mine Land Reclamation Program. Thirteen other states and tribes that had approved AML programs did not receive emergency grants. OSMRE managed emergencies in those 13 states and tribes as well as in Federal Program States without AML programs.
|
||||
|
||||
OSMRE officially notified the state and tribal officials and Congressional delegations that, starting on October 1, 2010, they would fully assume responsibility for funding their emergency programs. OSMRE then worked with states and tribes to ensure a smooth transition to the states’ assumption of responsibility for administering state emergency programs. New funding and carryover balances were used during the transition to address immediate needs.
|
||||
|
||||
Overall, OSMRE successfully transitioned the financial responsibility to the states in FY 2011, and continues to provide technical and program assistance when needed. States with AML programs are now in a position to effectively handle emergency programs.
|
||||
|
||||
Environmental
|
||||
AML problems impacting the environment are known as Priority 3 problems. While SMCRA does not require OSMRE to inventory every unreclaimed priority 3 problem, some program States and Indian tribes have chosen to submit such information. Information for priority 3 problem types is required when reclamation activities are funded and information on completed reclamation of priority 3 problems is kept in the inventory.
|
||||
|
||||
Other Coal Mine Related Problems
|
||||
Information is also kept on lower priority coal related AML problems such as lower priority coal-related projects involving public facilities, and the development of publicly-owned land. The lower priority problems are also categorized-- Priority 4 and 5 problem types.
|
||||
|
||||
Non-coal Mine Related AML Problems
|
||||
The non-coal problems are primarily problems reclaimed by States/Indian tribes that had "Certified" having addressed all known eligible coal related problems. States and Indian tribes managing their own AML programs reclaimed non-coal problems prior to addressing all their coal related problems under SMCRA SEC. 409-- FILLING VOIDS AND SEALING TUNNELS at the request of the Governor of the state or the governing body of the Indian tribe if the Secretary of the Department of the Interior determines such problems meet the criteria for a priority 1, extreme hazard, problems. This Program Area contains historical reclamation accomplishments for Certified Programs reclaiming Priority 1, 2, and 3 non-coal Problem Type features with pre-AML Reauthorization SMCRA funds distributed prior to October 1, 2007.
|
data/data-pipeline/data_pipeline/etl/sources/eamlis/etl.py (new file, 81 lines)
|
@ -0,0 +1,81 @@
|
|||
from pathlib import Path
|
||||
|
||||
import geopandas as gpd
|
||||
import pandas as pd
|
||||
from data_pipeline.config import settings
|
||||
from data_pipeline.etl.base import ExtractTransformLoad
|
||||
from data_pipeline.etl.base import ValidGeoLevel
|
||||
from data_pipeline.etl.sources.geo_utils import add_tracts_for_geometries
|
||||
from data_pipeline.utils import get_module_logger
|
||||
|
||||
logger = get_module_logger(__name__)
|
||||
|
||||
|
||||
class AbandonedMineETL(ExtractTransformLoad):
|
||||
"""Data from Office Of Surface Mining Reclamation and Enforcement's
|
||||
eAMLIS. These are the locations of abandoned mines.
|
||||
"""
|
||||
|
||||
# Metadata for the baseclass
|
||||
NAME = "eamlis"
|
||||
GEO_LEVEL = ValidGeoLevel.CENSUS_TRACT
|
||||
AML_BOOLEAN: str
|
||||
LOAD_YAML_CONFIG: bool = True
|
||||
|
||||
PUERTO_RICO_EXPECTED_IN_DATA = False
|
||||
EXPECTED_MISSING_STATES = [
|
||||
"10",
|
||||
"11",
|
||||
"12",
|
||||
"15",
|
||||
"23",
|
||||
"27",
|
||||
"31",
|
||||
"33",
|
||||
"34",
|
||||
"36",
|
||||
"45",
|
||||
"50",
|
||||
"55",
|
||||
]
|
||||
|
||||
# Define these for easy code completion
|
||||
def __init__(self):
|
||||
self.SOURCE_URL = (
|
||||
settings.AWS_JUSTICE40_DATASOURCES_URL
|
||||
+ "/eAMLIS export of all data.tsv.zip"
|
||||
)
|
||||
|
||||
self.TRACT_INPUT_COLUMN_NAME = self.INPUT_GEOID_TRACT_FIELD_NAME
|
||||
|
||||
self.OUTPUT_PATH: Path = (
|
||||
self.DATA_PATH / "dataset" / "abandoned_mine_land_inventory_system"
|
||||
)
|
||||
|
||||
self.COLUMNS_TO_KEEP = [
|
||||
self.GEOID_TRACT_FIELD_NAME,
|
||||
self.AML_BOOLEAN,
|
||||
]
|
||||
|
||||
self.output_df: pd.DataFrame
|
||||
|
||||
def transform(self) -> None:
|
||||
logger.info("Starting eAMLIS transforms.")
|
||||
df = pd.read_csv(
|
||||
self.get_tmp_path() / "eAMLIS export of all data.tsv",
|
||||
sep="\t",
|
||||
low_memory=False,
|
||||
)
|
||||
gdf = gpd.GeoDataFrame(
|
||||
df,
|
||||
geometry=gpd.points_from_xy(
|
||||
x=df["Longitude"],
|
||||
y=df["Latitude"],
|
||||
),
|
||||
crs="epsg:4326",
|
||||
)
|
||||
gdf = gdf.drop_duplicates(subset=["geometry"], keep="last")
|
||||
gdf_tracts = add_tracts_for_geometries(gdf)
|
||||
gdf_tracts = gdf_tracts.drop_duplicates(self.GEOID_TRACT_FIELD_NAME)
|
||||
gdf_tracts[self.AML_BOOLEAN] = True
|
||||
self.output_df = gdf_tracts[self.COLUMNS_TO_KEEP]
|
|
@ -1,6 +1,6 @@
|
|||
import pandas as pd
|
||||
|
||||
from data_pipeline.etl.base import ExtractTransformLoad
|
||||
from data_pipeline.etl.base import ValidGeoLevel
|
||||
from data_pipeline.score import field_names
|
||||
from data_pipeline.utils import get_module_logger
|
||||
|
||||
|
@ -8,21 +8,22 @@ logger = get_module_logger(__name__)
|
|||
|
||||
|
||||
class EJSCREENETL(ExtractTransformLoad):
|
||||
"""Load EJSCREEN data.
|
||||
"""Load updated EJSCREEN data."""
|
||||
|
||||
Data dictionary:
|
||||
https://gaftp.epa.gov/EJSCREEN/2019/2019_EJSCREEN_columns_explained.csv
|
||||
"""
|
||||
NAME = "ejscreen"
|
||||
GEO_LEVEL: ValidGeoLevel = ValidGeoLevel.CENSUS_TRACT
|
||||
INPUT_GEOID_TRACT_FIELD_NAME: str = "ID"
|
||||
|
||||
def __init__(self):
|
||||
self.EJSCREEN_FTP_URL = "https://edap-arcgiscloud-data-commons.s3.amazonaws.com/EJSCREEN2020/EJSCREEN_Tract_2020_USPR.csv.zip"
|
||||
self.EJSCREEN_CSV = self.get_tmp_path() / "EJSCREEN_Tract_2020_USPR.csv"
|
||||
self.CSV_PATH = self.DATA_PATH / "dataset" / "ejscreen_2019"
|
||||
self.EJSCREEN_FTP_URL = "https://gaftp.epa.gov/EJSCREEN/2021/EJSCREEN_2021_USPR_Tracts.csv.zip"
|
||||
self.EJSCREEN_CSV = (
|
||||
self.get_tmp_path() / "EJSCREEN_2021_USPR_Tracts.csv"
|
||||
)
|
||||
self.CSV_PATH = self.DATA_PATH / "dataset" / "ejscreen"
|
||||
self.df: pd.DataFrame
|
||||
|
||||
self.COLUMNS_TO_KEEP = [
|
||||
self.GEOID_TRACT_FIELD_NAME,
|
||||
field_names.TOTAL_POP_FIELD,
|
||||
# pylint: disable=duplicate-code
|
||||
field_names.AIR_TOXICS_CANCER_RISK_FIELD,
|
||||
field_names.RESPIRATORY_HAZARD_FIELD,
|
||||
|
@ -39,6 +40,7 @@ class EJSCREENETL(ExtractTransformLoad):
|
|||
field_names.OVER_64_FIELD,
|
||||
field_names.UNDER_5_FIELD,
|
||||
field_names.LEAD_PAINT_FIELD,
|
||||
field_names.UST_FIELD,
|
||||
]
|
||||
|
||||
def extract(self) -> None:
|
||||
|
@ -53,19 +55,16 @@ class EJSCREENETL(ExtractTransformLoad):
|
|||
logger.info("Transforming EJScreen Data")
|
||||
self.df = pd.read_csv(
|
||||
self.EJSCREEN_CSV,
|
||||
dtype={"ID": "string"},
|
||||
dtype={self.INPUT_GEOID_TRACT_FIELD_NAME: str},
|
||||
# EJSCREEN writes the word "None" for NA data.
|
||||
na_values=["None"],
|
||||
low_memory=False,
|
||||
)
|
||||
|
||||
# rename ID to Tract ID
|
||||
self.df.rename(
|
||||
self.output_df = self.df.rename(
|
||||
columns={
|
||||
"ID": self.GEOID_TRACT_FIELD_NAME,
|
||||
# Note: it is currently unorthodox to use `field_names` in an ETL class,
|
||||
# but I think that's the direction we'd like to move all ETL classes. - LMB
|
||||
"ACSTOTPOP": field_names.TOTAL_POP_FIELD,
|
||||
self.INPUT_GEOID_TRACT_FIELD_NAME: self.GEOID_TRACT_FIELD_NAME,
|
||||
"CANCER": field_names.AIR_TOXICS_CANCER_RISK_FIELD,
|
||||
"RESP": field_names.RESPIRATORY_HAZARD_FIELD,
|
||||
"DSLPM": field_names.DIESEL_FIELD,
|
||||
|
@ -81,14 +80,6 @@ class EJSCREENETL(ExtractTransformLoad):
|
|||
"OVER64PCT": field_names.OVER_64_FIELD,
|
||||
"UNDER5PCT": field_names.UNDER_5_FIELD,
|
||||
"PRE1960PCT": field_names.LEAD_PAINT_FIELD,
|
||||
"UST": field_names.UST_FIELD, # added for 2021 update
|
||||
},
|
||||
inplace=True,
|
||||
)
|
||||
|
||||
def load(self) -> None:
|
||||
logger.info("Saving EJScreen CSV")
|
||||
# write nationwide csv
|
||||
self.CSV_PATH.mkdir(parents=True, exist_ok=True)
|
||||
self.df[self.COLUMNS_TO_KEEP].to_csv(
|
||||
self.CSV_PATH / "usa.csv", index=False
|
||||
)
|
||||
|
|
|
@ -1,5 +1,4 @@
|
|||
import pandas as pd
|
||||
|
||||
from data_pipeline.etl.base import ExtractTransformLoad
|
||||
from data_pipeline.utils import get_module_logger
|
||||
|
||||
|
@ -58,7 +57,6 @@ class EJSCREENAreasOfConcernETL(ExtractTransformLoad):
|
|||
|
||||
# TO DO: As a one off we did all the processing in a separate Notebook
|
||||
# Can add here later for a future PR
|
||||
pass
|
||||
|
||||
def load(self) -> None:
|
||||
if self.ejscreen_areas_of_concern_data_exists():
|
||||
|
|
|
@ -1,10 +1,11 @@
|
|||
from pathlib import Path
|
||||
import pandas as pd
|
||||
|
||||
import pandas as pd
|
||||
from data_pipeline.config import settings
|
||||
from data_pipeline.etl.base import ExtractTransformLoad
|
||||
from data_pipeline.score import field_names
|
||||
from data_pipeline.utils import get_module_logger, unzip_file_from_url
|
||||
from data_pipeline.utils import get_module_logger
|
||||
from data_pipeline.utils import unzip_file_from_url
|
||||
|
||||
logger = get_module_logger(__name__)
|
||||
|
||||
|
|
|
@ -1,9 +1,11 @@
|
|||
from pathlib import Path
|
||||
import pandas as pd
|
||||
|
||||
import pandas as pd
|
||||
from data_pipeline.etl.base import ExtractTransformLoad
|
||||
from data_pipeline.score import field_names
|
||||
from data_pipeline.utils import get_module_logger, unzip_file_from_url
|
||||
from data_pipeline.utils import get_module_logger
|
||||
from data_pipeline.utils import unzip_file_from_url
|
||||
from data_pipeline.config import settings
|
||||
|
||||
logger = get_module_logger(__name__)
|
||||
|
||||
|
@ -20,7 +22,17 @@ class EPARiskScreeningEnvironmentalIndicatorsETL(ExtractTransformLoad):
|
|||
"""
|
||||
|
||||
def __init__(self):
|
||||
self.AGGREGATED_RSEI_SCORE_FILE_URL = "http://abt-rsei.s3.amazonaws.com/microdata2019/census_agg/CensusMicroTracts2019_2019_aggregated.zip"
|
||||
|
||||
if settings.DATASOURCE_RETRIEVAL_FROM_AWS:
|
||||
self.AGGREGATED_RSEI_SCORE_FILE_URL = (
|
||||
f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/"
|
||||
"epa_rsei/CensusMicroTracts2019_2019_aggregated.zip"
|
||||
)
|
||||
else:
|
||||
self.AGGREGATED_RSEI_SCORE_FILE_URL = (
|
||||
"http://abt-rsei.s3.amazonaws.com/microdata2019/"
|
||||
"census_agg/CensusMicroTracts2019_2019_aggregated.zip"
|
||||
)
|
||||
|
||||
self.OUTPUT_PATH: Path = self.DATA_PATH / "dataset" / "epa_rsei"
|
||||
self.EPA_RSEI_SCORE_THRESHOLD_CUTOFF = 0.75
|
||||
|
|
|
@ -0,0 +1,3 @@
|
|||
# FSF flood risk data
|
||||
|
||||
Flood risk computed as 1 in 100 year flood zone
|
|
@ -0,0 +1,86 @@
# pylint: disable=unsubscriptable-object
# pylint: disable=unsupported-assignment-operation
import pandas as pd
from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.base import ValidGeoLevel
from data_pipeline.utils import get_module_logger

logger = get_module_logger(__name__)


class FloodRiskETL(ExtractTransformLoad):
    """ETL class for the First Street Foundation flood risk dataset"""

    NAME = "fsf_flood_risk"
    # These data were emailed to the J40 team while First Street got
    # their official data sharing channels set up.
    SOURCE_URL = settings.AWS_JUSTICE40_DATASOURCES_URL + "/fsf_flood.zip"
    GEO_LEVEL = ValidGeoLevel.CENSUS_TRACT
    LOAD_YAML_CONFIG: bool = True

    # Output score variables (values set on datasets.yml) for linting purposes
    COUNT_PROPERTIES: str
    PROPERTIES_AT_RISK_FROM_FLOODING_TODAY: str
    PROPERTIES_AT_RISK_FROM_FLOODING_IN_30_YEARS: str
    SHARE_OF_PROPERTIES_AT_RISK_FROM_FLOODING_TODAY: str
    SHARE_OF_PROPERTIES_AT_RISK_FROM_FLOODING_IN_30_YEARS: str

    def __init__(self):
        # define the full path for the input CSV file
        self.INPUT_CSV = (
            self.get_tmp_path() / "fsf_flood" / "flood-tract2010.csv"
        )

        # this is the main dataframe
        self.df: pd.DataFrame

        # Start dataset-specific vars here
        self.COUNT_PROPERTIES_NATIVE_FIELD_NAME = "count_properties"
        self.COUNT_PROPERTIES_AT_RISK_TODAY = "mid_depth_100_year00"
        self.COUNT_PROPERTIES_AT_RISK_30_YEARS = "mid_depth_100_year30"
        self.CLIP_PROPERTIES_COUNT = 250

    def transform(self) -> None:
        """Reads the unzipped data file into memory and applies the following
        transformations to prepare it for the load() method:

        - Renames the Census Tract column to match the other datasets
        - Calculates share of properties at risk, left-clipping number of properties at 250
        """
        logger.info("Transforming First Street Foundation flood risk data")

        # read in the unzipped csv data source then rename the
        # Census Tract column for merging
        df_fsf_flood: pd.DataFrame = pd.read_csv(
            self.INPUT_CSV,
            dtype={self.INPUT_GEOID_TRACT_FIELD_NAME: str},
            low_memory=False,
        )

        df_fsf_flood[self.GEOID_TRACT_FIELD_NAME] = df_fsf_flood[
            self.INPUT_GEOID_TRACT_FIELD_NAME
        ].str.zfill(11)

        df_fsf_flood[self.COUNT_PROPERTIES] = df_fsf_flood[
            self.COUNT_PROPERTIES_NATIVE_FIELD_NAME
        ].clip(lower=self.CLIP_PROPERTIES_COUNT)

        df_fsf_flood[self.SHARE_OF_PROPERTIES_AT_RISK_FROM_FLOODING_TODAY] = (
            df_fsf_flood[self.COUNT_PROPERTIES_AT_RISK_TODAY]
            / df_fsf_flood[self.COUNT_PROPERTIES]
        )
        df_fsf_flood[
            self.SHARE_OF_PROPERTIES_AT_RISK_FROM_FLOODING_IN_30_YEARS
        ] = (
            df_fsf_flood[self.COUNT_PROPERTIES_AT_RISK_30_YEARS]
            / df_fsf_flood[self.COUNT_PROPERTIES]
        )

        # Assign the final df to the class' output_df for the load method with rename
        self.output_df = df_fsf_flood.rename(
            columns={
                self.COUNT_PROPERTIES_AT_RISK_TODAY: self.PROPERTIES_AT_RISK_FROM_FLOODING_TODAY,
                self.COUNT_PROPERTIES_AT_RISK_30_YEARS: self.PROPERTIES_AT_RISK_FROM_FLOODING_IN_30_YEARS,
            }
        )
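A hedged sketch of how an ETL class like this is typically exercised end to end, assuming the standard `ExtractTransformLoad` entry points used elsewhere in this pipeline (`extract`, `transform`, `load`); the module path shown is an assumption and the actual runner in the repo may differ:

```python
from data_pipeline.etl.sources.fsf_flood_risk.etl import FloodRiskETL  # assumed path

etl = FloodRiskETL()
etl.extract()    # download and unzip the source data into the tmp path
etl.transform()  # build self.output_df with the renamed and derived columns
etl.load()       # write the dataset CSV via the base-class loader
```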
@ -0,0 +1,3 @@
# FSF wildfire risk data

Fire risk is computed as a burn risk probability >= 0.003.
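The underlying First Street data flag a property as at risk when its burn probability meets that cutoff; a small illustrative sketch (column names here are hypothetical, not the actual source fields, which arrive already aggregated to tracts):

```python
import pandas as pd

BURN_PROBABILITY_CUTOFF = 0.003

# Hypothetical property-level frame for illustration only.
properties = pd.DataFrame({"burn_probability": [0.0001, 0.004, 0.01]})
properties["at_risk_from_fire"] = (
    properties["burn_probability"] >= BURN_PROBABILITY_CUTOFF
)
```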
@ -0,0 +1,83 @@
# pylint: disable=unsubscriptable-object
# pylint: disable=unsupported-assignment-operation
import pandas as pd
from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.base import ValidGeoLevel
from data_pipeline.utils import get_module_logger

logger = get_module_logger(__name__)


class WildfireRiskETL(ExtractTransformLoad):
    """ETL class for the First Street Foundation wildfire risk dataset"""

    NAME = "fsf_wildfire_risk"
    # These data were emailed to the J40 team while First Street got
    # their official data sharing channels set up.
    SOURCE_URL = settings.AWS_JUSTICE40_DATASOURCES_URL + "/fsf_fire.zip"
    GEO_LEVEL = ValidGeoLevel.CENSUS_TRACT
    PUERTO_RICO_EXPECTED_IN_DATA = False
    LOAD_YAML_CONFIG: bool = True
    ALASKA_AND_HAWAII_EXPECTED_IN_DATA = False

    # Output score variables (values set on datasets.yml) for linting purposes
    COUNT_PROPERTIES: str
    PROPERTIES_AT_RISK_FROM_FIRE_TODAY: str
    PROPERTIES_AT_RISK_FROM_FIRE_IN_30_YEARS: str
    SHARE_OF_PROPERTIES_AT_RISK_FROM_FIRE_TODAY: str
    SHARE_OF_PROPERTIES_AT_RISK_FROM_FIRE_IN_30_YEARS: str

    def __init__(self):
        # define the full path for the input CSV file
        self.INPUT_CSV = self.get_tmp_path() / "fsf_fire" / "fire-tract2010.csv"

        # this is the main dataframe
        self.df: pd.DataFrame

        # Start dataset-specific vars here
        self.COUNT_PROPERTIES_NATIVE_FIELD_NAME = "count_properties"
        self.COUNT_PROPERTIES_AT_RISK_TODAY = "burnprob_year00_flag"
        self.COUNT_PROPERTIES_AT_RISK_30_YEARS = "burnprob_year30_flag"
        self.CLIP_PROPERTIES_COUNT = 250

    def transform(self) -> None:
        """Reads the unzipped data file into memory and applies the following
        transformations to prepare it for the load() method:

        - Renames the Census Tract column to match the other datasets
        - Calculates share of properties at risk, left-clipping number of properties at 250
        """
        logger.info("Transforming First Street Foundation wildfire risk data")
        # read in the unzipped csv data source then rename the
        # Census Tract column for merging
        df_fsf_fire: pd.DataFrame = pd.read_csv(
            self.INPUT_CSV,
            dtype={self.INPUT_GEOID_TRACT_FIELD_NAME: str},
            low_memory=False,
        )

        df_fsf_fire[self.GEOID_TRACT_FIELD_NAME] = df_fsf_fire[
            self.INPUT_GEOID_TRACT_FIELD_NAME
        ].str.zfill(11)

        df_fsf_fire[self.COUNT_PROPERTIES] = df_fsf_fire[
            self.COUNT_PROPERTIES_NATIVE_FIELD_NAME
        ].clip(lower=self.CLIP_PROPERTIES_COUNT)

        df_fsf_fire[self.SHARE_OF_PROPERTIES_AT_RISK_FROM_FIRE_TODAY] = (
            df_fsf_fire[self.COUNT_PROPERTIES_AT_RISK_TODAY]
            / df_fsf_fire[self.COUNT_PROPERTIES]
        )
        df_fsf_fire[self.SHARE_OF_PROPERTIES_AT_RISK_FROM_FIRE_IN_30_YEARS] = (
            df_fsf_fire[self.COUNT_PROPERTIES_AT_RISK_30_YEARS]
            / df_fsf_fire[self.COUNT_PROPERTIES]
        )

        # Assign the final df to the class' output_df for the load method with rename
        self.output_df = df_fsf_fire.rename(
            columns={
                self.COUNT_PROPERTIES_AT_RISK_TODAY: self.PROPERTIES_AT_RISK_FROM_FIRE_TODAY,
                self.COUNT_PROPERTIES_AT_RISK_30_YEARS: self.PROPERTIES_AT_RISK_FROM_FIRE_IN_30_YEARS,
            }
        )
data/data-pipeline/data_pipeline/etl/sources/geo_utils.py (new file, 92 lines)
@ -0,0 +1,92 @@
"""Utilities for turning geographies into tracts, using census data"""
from functools import lru_cache
from pathlib import Path
from typing import Optional

import geopandas as gpd
from data_pipeline.etl.sources.tribal.etl import TribalETL
from data_pipeline.utils import get_module_logger

from .census.etl import CensusETL

logger = get_module_logger(__name__)


@lru_cache()
def get_tract_geojson(
    _tract_data_path: Optional[Path] = None,
) -> gpd.GeoDataFrame:
    logger.info("Loading tract geometry data from census ETL")
    GEOJSON_PATH = _tract_data_path
    if GEOJSON_PATH is None:
        GEOJSON_PATH = CensusETL.NATIONAL_TRACT_JSON_PATH
        if not GEOJSON_PATH.exists():
            logger.debug("Census data has not been computed, running")
            census_etl = CensusETL()
            census_etl.extract()
            census_etl.transform()
            census_etl.load()
    tract_data = gpd.read_file(
        GEOJSON_PATH,
        include_fields=["GEOID10"],
    )
    tract_data = tract_data.rename(
        columns={"GEOID10": "GEOID10_TRACT"}, errors="raise"
    )
    return tract_data


@lru_cache()
def get_tribal_geojson(
    _tribal_data_path: Optional[Path] = None,
) -> gpd.GeoDataFrame:
    logger.info("Loading Tribal geometry data from Tribal ETL")
    GEOJSON_PATH = _tribal_data_path
    if GEOJSON_PATH is None:
        GEOJSON_PATH = TribalETL().NATIONAL_TRIBAL_GEOJSON_PATH
        if not GEOJSON_PATH.exists():
            logger.debug("Tribal data has not been computed, running")
            tribal_etl = TribalETL()
            tribal_etl.extract()
            tribal_etl.transform()
            tribal_etl.load()
    tribal_data = gpd.read_file(
        GEOJSON_PATH,
    )
    return tribal_data


def add_tracts_for_geometries(
    df: gpd.GeoDataFrame, tract_data: Optional[gpd.GeoDataFrame] = None
) -> gpd.GeoDataFrame:
    """Adds tract-geoids to dataframe df that contains spatial geometries

    Depends on CensusETL for the geodata to do its conversion

    Args:
        df (GeoDataFrame): a geopandas GeoDataFrame with a point geometry column
        tract_data (GeoDataFrame): optional override to directly pass a
            geodataframe of the tract boundaries. Also helps simplify testing.

    Returns:
        GeoDataFrame: the above dataframe, with an additional GEOID10_TRACT column that
            maps the points in DF to census tracts and a geometry column for later
            spatial analysis
    """
    logger.debug("Appending tract data to dataframe")

    if tract_data is None:
        tract_data = get_tract_geojson()
    else:
        logger.debug("Using existing tract data.")

    assert (
        tract_data.crs == df.crs
    ), f"Dataframe must be projected to {tract_data.crs}"
    df = gpd.sjoin(
        df,
        tract_data[["GEOID10_TRACT", "geometry"]],
        how="inner",
        op="intersects",
    )
    return df
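A brief usage sketch for `add_tracts_for_geometries`, assuming a small GeoDataFrame of points already projected to the same CRS as the tract file (the function asserts on matching CRS before the spatial join); the point data below is hypothetical:

```python
import geopandas as gpd
from shapely.geometry import Point

from data_pipeline.etl.sources.geo_utils import (
    add_tracts_for_geometries,
    get_tract_geojson,
)

tracts = get_tract_geojson()

# Hypothetical point data; coordinates are illustrative only.
points = gpd.GeoDataFrame(
    {"site_id": ["a", "b"]},
    geometry=[Point(-86.5, 32.4), Point(-149.9, 61.2)],
    crs=tracts.crs,
)

# Returns the points with a GEOID10_TRACT column attached via spatial join.
points_with_tracts = add_tracts_for_geometries(points, tract_data=tracts)
```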
@ -1,16 +1,18 @@
|
|||
import pandas as pd
|
||||
|
||||
from data_pipeline.config import settings
|
||||
from data_pipeline.etl.base import ExtractTransformLoad
|
||||
from data_pipeline.utils import (
|
||||
get_module_logger,
|
||||
unzip_file_from_url,
|
||||
)
|
||||
from data_pipeline.etl.base import ValidGeoLevel
|
||||
from data_pipeline.utils import get_module_logger
|
||||
from data_pipeline.utils import unzip_file_from_url
|
||||
|
||||
logger = get_module_logger(__name__)
|
||||
|
||||
|
||||
class GeoCorrETL(ExtractTransformLoad):
|
||||
NAME = "geocorr"
|
||||
GEO_LEVEL: ValidGeoLevel = ValidGeoLevel.CENSUS_TRACT
|
||||
PUERTO_RICO_EXPECTED_IN_DATA = False
|
||||
|
||||
def __init__(self):
|
||||
self.OUTPUT_PATH = self.DATA_PATH / "dataset" / "geocorr"
|
||||
|
||||
|
@ -24,6 +26,10 @@ class GeoCorrETL(ExtractTransformLoad):
|
|||
self.GEOCORR_PLACES_URL = "https://justice40-data.s3.amazonaws.com/data-sources/geocorr_urban_rural.csv.zip"
|
||||
self.GEOCORR_GEOID_FIELD_NAME = "GEOID10_TRACT"
|
||||
self.URBAN_HEURISTIC_FIELD_NAME = "Urban Heuristic Flag"
|
||||
self.COLUMNS_TO_KEEP = [
|
||||
self.GEOID_TRACT_FIELD_NAME,
|
||||
self.URBAN_HEURISTIC_FIELD_NAME,
|
||||
]
|
||||
|
||||
self.df: pd.DataFrame
|
||||
|
||||
|
@ -35,13 +41,11 @@ class GeoCorrETL(ExtractTransformLoad):
|
|||
file_url=settings.AWS_JUSTICE40_DATASOURCES_URL
|
||||
+ "/geocorr_urban_rural.csv.zip",
|
||||
download_path=self.get_tmp_path(),
|
||||
unzipped_file_path=self.get_tmp_path() / "geocorr",
|
||||
unzipped_file_path=self.get_tmp_path(),
|
||||
)
|
||||
|
||||
self.df = pd.read_csv(
|
||||
filepath_or_buffer=self.get_tmp_path()
|
||||
/ "geocorr"
|
||||
/ "geocorr_urban_rural.csv",
|
||||
filepath_or_buffer=self.get_tmp_path() / "geocorr_urban_rural.csv",
|
||||
dtype={
|
||||
self.GEOCORR_GEOID_FIELD_NAME: "string",
|
||||
},
|
||||
|
@ -50,22 +54,10 @@ class GeoCorrETL(ExtractTransformLoad):
|
|||
|
||||
def transform(self) -> None:
|
||||
logger.info("Starting GeoCorr Urban Rural Map transform")
|
||||
# Put in logic from Jupyter Notebook transform when we switch in the hyperlink to Geocorr
|
||||
|
||||
self.df.rename(
|
||||
self.output_df = self.df.rename(
|
||||
columns={
|
||||
"urban_heuristic_flag": self.URBAN_HEURISTIC_FIELD_NAME,
|
||||
},
|
||||
inplace=True,
|
||||
)
|
||||
|
||||
pass
|
||||
|
||||
# Put in logic from Jupyter Notebook transform when we switch in the hyperlink to Geocorr
|
||||
|
||||
def load(self) -> None:
|
||||
logger.info("Saving GeoCorr Urban Rural Map Data")
|
||||
|
||||
# mkdir census
|
||||
self.OUTPUT_PATH.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
self.df.to_csv(path_or_buf=self.OUTPUT_PATH / "usa.csv", index=False)
|
||||
|
|
|
@ -0,0 +1,70 @@
import pandas as pd
from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.base import ValidGeoLevel
from data_pipeline.utils import get_module_logger

logger = get_module_logger(__name__)


class HistoricRedliningETL(ExtractTransformLoad):
    NAME = "historic_redlining"
    GEO_LEVEL: ValidGeoLevel = ValidGeoLevel.CENSUS_TRACT
    EXPECTED_MISSING_STATES = [
        "10",
        "11",
        "16",
        "23",
        "30",
        "32",
        "35",
        "38",
        "46",
        "50",
        "56",
    ]
    PUERTO_RICO_EXPECTED_IN_DATA = False
    ALASKA_AND_HAWAII_EXPECTED_IN_DATA: bool = False
    SOURCE_URL = settings.AWS_JUSTICE40_DATASOURCES_URL + "/HRS_2010.zip"

    def __init__(self):
        self.CSV_PATH = self.DATA_PATH / "dataset" / "historic_redlining"

        self.HISTORIC_REDLINING_FILE_PATH = (
            self.get_tmp_path() / "HRS_2010.xlsx"
        )

        self.REDLINING_SCALAR = "Tract-level redlining score"

        self.COLUMNS_TO_KEEP = [
            self.GEOID_TRACT_FIELD_NAME,
            self.REDLINING_SCALAR,
        ]
        self.df: pd.DataFrame

    def transform(self) -> None:
        logger.info("Transforming Historic Redlining Data")
        # this is obviously temporary
        historic_redlining_data = pd.read_excel(
            self.HISTORIC_REDLINING_FILE_PATH
        )
        historic_redlining_data[self.GEOID_TRACT_FIELD_NAME] = (
            historic_redlining_data["GEOID10"].astype(str).str.zfill(11)
        )
        historic_redlining_data = historic_redlining_data.rename(
            columns={"HRS2010": self.REDLINING_SCALAR}
        )

        logger.info(f"{historic_redlining_data.columns}")

        # Calculate lots of different score thresholds for convenience
        for threshold in [3.25, 3.5, 3.75]:
            historic_redlining_data[
                f"{self.REDLINING_SCALAR} meets or exceeds {round(threshold, 2)}"
            ] = (historic_redlining_data[self.REDLINING_SCALAR] >= threshold)
            ## NOTE We add to columns to keep here
            self.COLUMNS_TO_KEEP.append(
                f"{self.REDLINING_SCALAR} meets or exceeds {round(threshold, 2)}"
            )

        self.output_df = historic_redlining_data
@ -1,9 +1,9 @@
import pandas as pd
from pandas.errors import EmptyDataError

from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.sources.census.etl_utils import get_state_fips_codes
from data_pipeline.utils import get_module_logger, unzip_file_from_url
from data_pipeline.utils import get_module_logger
from data_pipeline.utils import unzip_file_from_url
from pandas.errors import EmptyDataError

logger = get_module_logger(__name__)


@ -35,7 +35,7 @@ class HousingTransportationETL(ExtractTransformLoad):

            # New file name:
            tmp_csv_file_path = (
                zip_file_dir / f"htaindex_data_tracts_{fips}.csv"
                zip_file_dir / f"htaindex2019_data_tracts_{fips}.csv"
            )

            try:

@ -1,16 +1,28 @@
|
|||
import pandas as pd
|
||||
from data_pipeline.etl.base import ExtractTransformLoad
|
||||
from data_pipeline.etl.base import ValidGeoLevel
|
||||
from data_pipeline.utils import get_module_logger
|
||||
from data_pipeline.config import settings
|
||||
|
||||
logger = get_module_logger(__name__)
|
||||
|
||||
|
||||
class HudHousingETL(ExtractTransformLoad):
|
||||
NAME = "hud_housing"
|
||||
GEO_LEVEL: ValidGeoLevel = ValidGeoLevel.CENSUS_TRACT
|
||||
|
||||
def __init__(self):
|
||||
self.OUTPUT_PATH = self.DATA_PATH / "dataset" / "hud_housing"
|
||||
self.GEOID_TRACT_FIELD_NAME = "GEOID10_TRACT"
|
||||
self.HOUSING_FTP_URL = "https://www.huduser.gov/portal/datasets/cp/2014thru2018-140-csv.zip"
|
||||
self.HOUSING_ZIP_FILE_DIR = self.get_tmp_path() / "hud_housing"
|
||||
|
||||
if settings.DATASOURCE_RETRIEVAL_FROM_AWS:
|
||||
self.HOUSING_FTP_URL = (
|
||||
f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/"
|
||||
"hud_housing/2014thru2018-140-csv.zip"
|
||||
)
|
||||
else:
|
||||
self.HOUSING_FTP_URL = "https://www.huduser.gov/portal/datasets/cp/2014thru2018-140-csv.zip"
|
||||
|
||||
self.HOUSING_ZIP_FILE_DIR = self.get_tmp_path()
|
||||
|
||||
# We measure households earning less than 80% of HUD Area Median Family Income by county
|
||||
# and paying greater than 30% of their income to housing costs.
|
||||
|
@ -19,6 +31,17 @@ class HudHousingETL(ExtractTransformLoad):
|
|||
self.HOUSING_BURDEN_DENOMINATOR_FIELD_NAME = (
|
||||
"HOUSING_BURDEN_DENOMINATOR"
|
||||
)
|
||||
self.NO_KITCHEN_OR_INDOOR_PLUMBING_FIELD_NAME = (
|
||||
"Share of homes with no kitchen or indoor plumbing (percent)"
|
||||
)
|
||||
self.COLUMNS_TO_KEEP = [
|
||||
self.GEOID_TRACT_FIELD_NAME,
|
||||
self.HOUSING_BURDEN_NUMERATOR_FIELD_NAME,
|
||||
self.HOUSING_BURDEN_DENOMINATOR_FIELD_NAME,
|
||||
self.HOUSING_BURDEN_FIELD_NAME,
|
||||
self.NO_KITCHEN_OR_INDOOR_PLUMBING_FIELD_NAME,
|
||||
"DENOM INCL NOT COMPUTED",
|
||||
]
|
||||
|
||||
# Note: some variable definitions.
|
||||
# HUD-adjusted median family income (HAMFI).
|
||||
|
@ -27,7 +50,8 @@ class HudHousingETL(ExtractTransformLoad):
|
|||
# - incomplete plumbing facilities,
|
||||
# - more than 1 person per room,
|
||||
# - cost burden greater than 30%.
|
||||
# Table 8 is the desired table.
|
||||
# Table 8 is the desired table for housing burden
|
||||
# Table 3 is the desired table for no kitchen or indoor plumbing
|
||||
|
||||
self.df: pd.DataFrame
|
||||
|
||||
|
@ -38,124 +62,74 @@ class HudHousingETL(ExtractTransformLoad):
|
|||
self.HOUSING_ZIP_FILE_DIR,
|
||||
)
|
||||
|
||||
def transform(self) -> None:
|
||||
logger.info("Transforming HUD Housing Data")
|
||||
|
||||
def _read_chas_table(self, file_name):
|
||||
# New file name:
|
||||
tmp_csv_file_path = self.HOUSING_ZIP_FILE_DIR / "140" / "Table8.csv"
|
||||
self.df = pd.read_csv(
|
||||
tmp_csv_file_path = self.HOUSING_ZIP_FILE_DIR / "140" / file_name
|
||||
tmp_df = pd.read_csv(
|
||||
filepath_or_buffer=tmp_csv_file_path,
|
||||
encoding="latin-1",
|
||||
)
|
||||
|
||||
# Rename and reformat block group ID
|
||||
self.df.rename(
|
||||
columns={"geoid": self.GEOID_TRACT_FIELD_NAME}, inplace=True
|
||||
)
|
||||
|
||||
# The CHAS data has census tract ids such as `14000US01001020100`
|
||||
# Whereas the rest of our data uses, for the same tract, `01001020100`.
|
||||
# the characters before `US`:
|
||||
self.df[self.GEOID_TRACT_FIELD_NAME] = self.df[
|
||||
self.GEOID_TRACT_FIELD_NAME
|
||||
].str.replace(r"^.*?US", "", regex=True)
|
||||
# This reformats and renames this field.
|
||||
tmp_df[self.GEOID_TRACT_FIELD_NAME] = tmp_df["geoid"].str.replace(
|
||||
r"^.*?US", "", regex=True
|
||||
)
|
||||
|
||||
return tmp_df
|
||||
|
||||
def transform(self) -> None:
|
||||
logger.info("Transforming HUD Housing Data")
|
||||
|
||||
table_8 = self._read_chas_table("Table8.csv")
|
||||
table_3 = self._read_chas_table("Table3.csv")
|
||||
|
||||
self.df = table_8.merge(
|
||||
table_3, how="outer", on=self.GEOID_TRACT_FIELD_NAME
|
||||
)
|
||||
|
||||
# Calculate share that lacks indoor plumbing or kitchen
|
||||
# This is computed as
|
||||
# (
|
||||
# owner occupied without plumbing + renter occupied without plumbing
|
||||
# ) / (
|
||||
# total of owner and renter occupied
|
||||
# )
|
||||
self.df[self.NO_KITCHEN_OR_INDOOR_PLUMBING_FIELD_NAME] = (
|
||||
# T3_est3: owner-occupied lacking complete plumbing or kitchen facilities for all levels of income
|
||||
# T3_est46: subtotal: renter-occupied lacking complete plumbing or kitchen facilities for all levels of income
|
||||
# T3_est2: subtotal: owner-occupied for all levels of income
|
||||
# T3_est45: subtotal: renter-occupied for all levels of income
|
||||
self.df["T3_est3"]
|
||||
+ self.df["T3_est46"]
|
||||
) / (self.df["T3_est2"] + self.df["T3_est45"])
|
||||
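        # Worked example of the share above (illustrative numbers only):
        # if a tract has T3_est3 = 5 owner-occupied and T3_est46 = 15 renter-occupied
        # units lacking complete plumbing or kitchen facilities, out of
        # T3_est2 = 400 owner-occupied and T3_est45 = 600 renter-occupied units,
        # the share is (5 + 15) / (400 + 600) = 0.02, i.e. 2 percent.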
|
||||
# Calculate housing burden
|
||||
# This is quite a number of steps. It does not appear to be accessible nationally in a simpler format, though.
|
||||
# See "CHAS data dictionary 12-16.xlsx"
|
||||
|
||||
# Owner occupied numerator fields
|
||||
OWNER_OCCUPIED_NUMERATOR_FIELDS = [
|
||||
# Column Name
|
||||
# Line_Type
|
||||
# Tenure
|
||||
# Household income
|
||||
# Cost burden
|
||||
# Facilities
|
||||
"T8_est7",
|
||||
# Subtotal
|
||||
# Owner occupied
|
||||
# less than or equal to 30% of HAMFI
|
||||
# greater than 30% but less than or equal to 50%
|
||||
# All
|
||||
"T8_est10",
|
||||
# Subtotal
|
||||
# Owner occupied
|
||||
# less than or equal to 30% of HAMFI
|
||||
# greater than 50%
|
||||
# All
|
||||
"T8_est20",
|
||||
# Subtotal
|
||||
# Owner occupied
|
||||
# greater than 30% but less than or equal to 50% of HAMFI
|
||||
# greater than 30% but less than or equal to 50%
|
||||
# All
|
||||
"T8_est23",
|
||||
# Subtotal
|
||||
# Owner occupied
|
||||
# greater than 30% but less than or equal to 50% of HAMFI
|
||||
# greater than 50%
|
||||
# All
|
||||
"T8_est33",
|
||||
# Subtotal
|
||||
# Owner occupied
|
||||
# greater than 50% but less than or equal to 80% of HAMFI
|
||||
# greater than 30% but less than or equal to 50%
|
||||
# All
|
||||
"T8_est36",
|
||||
# Subtotal
|
||||
# Owner occupied
|
||||
# greater than 50% but less than or equal to 80% of HAMFI
|
||||
# greater than 50%
|
||||
# All
|
||||
"T8_est7", # Owner, less than or equal to 30% of HAMFI, greater than 30% but less than or equal to 50%
|
||||
"T8_est10", # Owner, less than or equal to 30% of HAMFI, greater than 50%
|
||||
"T8_est20", # Owner, greater than 30% but less than or equal to 50% of HAMFI, greater than 30% but less than or equal to 50%
|
||||
"T8_est23", # Owner, greater than 30% but less than or equal to 50% of HAMFI, greater than 50%
|
||||
"T8_est33", # Owner, greater than 50% but less than or equal to 80% of HAMFI, greater than 30% but less than or equal to 50%
|
||||
"T8_est36", # Owner, greater than 50% but less than or equal to 80% of HAMFI, greater than 50%
|
||||
]
|
||||
|
||||
# These rows have the values where HAMFI was not computed, b/c of no or negative income.
|
||||
# They are in the same order as the rows above
|
||||
OWNER_OCCUPIED_NOT_COMPUTED_FIELDS = [
|
||||
# Column Name
|
||||
# Line_Type
|
||||
# Tenure
|
||||
# Household income
|
||||
# Cost burden
|
||||
# Facilities
|
||||
"T8_est13",
|
||||
# Subtotal
|
||||
# Owner occupied
|
||||
# less than or equal to 30% of HAMFI
|
||||
# not computed (no/negative income)
|
||||
# All
|
||||
"T8_est26",
|
||||
# Subtotal
|
||||
# Owner occupied
|
||||
# greater than 30% but less than or equal to 50% of HAMFI
|
||||
# not computed (no/negative income)
|
||||
# All
|
||||
"T8_est39",
|
||||
# Subtotal
|
||||
# Owner occupied
|
||||
# greater than 50% but less than or equal to 80% of HAMFI
|
||||
# not computed (no/negative income)
|
||||
# All
|
||||
"T8_est52",
|
||||
# Subtotal
|
||||
# Owner occupied
|
||||
# greater than 80% but less than or equal to 100% of HAMFI
|
||||
# not computed (no/negative income)
|
||||
# All
|
||||
"T8_est65",
|
||||
# Subtotal
|
||||
# Owner occupied
|
||||
# greater than 100% of HAMFI
|
||||
# not computed (no/negative income)
|
||||
# All
|
||||
]
|
||||
|
||||
# This represents all owner-occupied housing units
|
||||
OWNER_OCCUPIED_POPULATION_FIELD = "T8_est2"
|
||||
# Subtotal
|
||||
# Owner occupied
|
||||
# All
|
||||
# All
|
||||
# All
|
||||
|
||||
# Renter occupied numerator fields
|
||||
RENTER_OCCUPIED_NUMERATOR_FIELDS = [
|
||||
|
@ -280,18 +254,4 @@ class HudHousingETL(ExtractTransformLoad):
|
|||
float
|
||||
)
|
||||
|
||||
def load(self) -> None:
|
||||
logger.info("Saving HUD Housing Data")
|
||||
|
||||
self.OUTPUT_PATH.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Drop unnecessary fields
|
||||
self.df[
|
||||
[
|
||||
self.GEOID_TRACT_FIELD_NAME,
|
||||
self.HOUSING_BURDEN_NUMERATOR_FIELD_NAME,
|
||||
self.HOUSING_BURDEN_DENOMINATOR_FIELD_NAME,
|
||||
self.HOUSING_BURDEN_FIELD_NAME,
|
||||
"DENOM INCL NOT COMPUTED",
|
||||
]
|
||||
].to_csv(path_or_buf=self.OUTPUT_PATH / "usa.csv", index=False)
|
||||
self.output_df = self.df
|
||||
|
|
|
@ -1,16 +1,27 @@
import pandas as pd
import requests

from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.utils import get_module_logger


logger = get_module_logger(__name__)


class HudRecapETL(ExtractTransformLoad):
    def __init__(self):
        # pylint: disable=line-too-long
        self.HUD_RECAP_CSV_URL = "https://opendata.arcgis.com/api/v3/datasets/56de4edea8264fe5a344da9811ef5d6e_0/downloads/data?format=csv&spatialRefId=4326"  # noqa: E501
        if settings.DATASOURCE_RETRIEVAL_FROM_AWS:
            self.HUD_RECAP_CSV_URL = (
                f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/"
                "hud_recap/Racially_or_Ethnically_Concentrated_Areas_of_Poverty__R_ECAPs_.csv"
            )
        else:
            self.HUD_RECAP_CSV_URL = (
                "https://opendata.arcgis.com/api/v3/datasets/"
                "56de4edea8264fe5a344da9811ef5d6e_0/downloads/data?format=csv&spatialRefId=4326"
            )

        self.HUD_RECAP_CSV = (
            self.get_tmp_path()
            / "Racially_or_Ethnically_Concentrated_Areas_of_Poverty__R_ECAPs_.csv"

@ -26,7 +37,11 @@ class HudRecapETL(ExtractTransformLoad):

    def extract(self) -> None:
        logger.info("Downloading HUD Recap Data")
        download = requests.get(self.HUD_RECAP_CSV_URL, verify=None)
        download = requests.get(
            self.HUD_RECAP_CSV_URL,
            verify=None,
            timeout=settings.REQUESTS_DEFAULT_TIMOUT,
        )
        file_contents = download.content
        csv_file = open(self.HUD_RECAP_CSV, "wb")
        csv_file.write(file_contents)

@ -1,10 +1,9 @@
import pandas as pd
import geopandas as gpd

from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.utils import get_module_logger
from data_pipeline.score import field_names
import pandas as pd
from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.score import field_names
from data_pipeline.utils import get_module_logger

logger = get_module_logger(__name__)


@ -96,4 +95,3 @@ class MappingForEJETL(ExtractTransformLoad):

    def validate(self) -> None:
        logger.info("Validating Mapping For EJ Data")
        pass

@ -37,4 +37,4 @@ Oklahoma City,90R,D
Milwaukee Co.,S-D1,D
Milwaukee Co.,S-D2,D
Milwaukee Co.,S-D3,D
Milwaukee Co.,S-D4,D
Milwaukee Co.,S-D4,D

@ -1,10 +1,12 @@
|
|||
import pathlib
|
||||
|
||||
import numpy as np
|
||||
import pandas as pd
|
||||
|
||||
from data_pipeline.etl.base import ExtractTransformLoad
|
||||
from data_pipeline.score import field_names
|
||||
from data_pipeline.utils import download_file_from_url, get_module_logger
|
||||
from data_pipeline.utils import download_file_from_url
|
||||
from data_pipeline.utils import get_module_logger
|
||||
from data_pipeline.config import settings
|
||||
|
||||
logger = get_module_logger(__name__)
|
||||
|
||||
|
@ -21,10 +23,16 @@ class MappingInequalityETL(ExtractTransformLoad):
|
|||
"""
|
||||
|
||||
def __init__(self):
|
||||
self.MAPPING_INEQUALITY_CSV_URL = (
|
||||
"https://raw.githubusercontent.com/americanpanorama/Census_HOLC_Research/"
|
||||
"main/2010_Census_Tracts/holc_tract_lookup.csv"
|
||||
)
|
||||
if settings.DATASOURCE_RETRIEVAL_FROM_AWS:
|
||||
self.MAPPING_INEQUALITY_CSV_URL = (
|
||||
f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/"
|
||||
"mapping_inequality/holc_tract_lookup.csv"
|
||||
)
|
||||
else:
|
||||
self.MAPPING_INEQUALITY_CSV_URL = (
|
||||
"https://raw.githubusercontent.com/americanpanorama/Census_HOLC_Research/"
|
||||
"main/2010_Census_Tracts/holc_tract_lookup.csv"
|
||||
)
|
||||
self.MAPPING_INEQUALITY_CSV = (
|
||||
self.get_tmp_path() / "holc_tract_lookup.csv"
|
||||
)
|
||||
|
@ -47,16 +55,21 @@ class MappingInequalityETL(ExtractTransformLoad):
|
|||
self.HOLC_GRADE_AND_ID_FIELD: str = "holc_id"
|
||||
self.CITY_INPUT_FIELD: str = "city"
|
||||
|
||||
self.HOLC_GRADE_D_FIELD: str = "HOLC Grade D"
|
||||
self.HOLC_GRADE_D_FIELD: str = "HOLC Grade D (hazardous)"
|
||||
self.HOLC_GRADE_C_FIELD: str = "HOLC Grade C (declining)"
|
||||
self.HOLC_GRADE_MANUAL_FIELD: str = "HOLC Grade (manually mapped)"
|
||||
self.HOLC_GRADE_DERIVED_FIELD: str = "HOLC Grade (derived)"
|
||||
|
||||
self.COLUMNS_TO_KEEP = [
|
||||
self.GEOID_TRACT_FIELD_NAME,
|
||||
field_names.HOLC_GRADE_C_TRACT_PERCENT_FIELD,
|
||||
field_names.HOLC_GRADE_C_OR_D_TRACT_PERCENT_FIELD,
|
||||
field_names.HOLC_GRADE_C_OR_D_TRACT_50_PERCENT_FIELD,
|
||||
field_names.HOLC_GRADE_D_TRACT_PERCENT_FIELD,
|
||||
field_names.HOLC_GRADE_D_TRACT_20_PERCENT_FIELD,
|
||||
field_names.HOLC_GRADE_D_TRACT_50_PERCENT_FIELD,
|
||||
field_names.HOLC_GRADE_D_TRACT_75_PERCENT_FIELD,
|
||||
field_names.REDLINED_SHARE,
|
||||
]
|
||||
|
||||
self.df: pd.DataFrame
|
||||
|
@ -113,34 +126,58 @@ class MappingInequalityETL(ExtractTransformLoad):
|
|||
how="left",
|
||||
)
|
||||
|
||||
# Create a single field that combines the 'derived' grade D field with the
|
||||
# manually mapped grade D field into a single grade D field.
|
||||
merged_df[self.HOLC_GRADE_D_FIELD] = np.where(
|
||||
(merged_df[self.HOLC_GRADE_DERIVED_FIELD] == "D")
|
||||
| (merged_df[self.HOLC_GRADE_MANUAL_FIELD] == "D"),
|
||||
True,
|
||||
None,
|
||||
)
|
||||
# Create a single field that combines the 'derived' grade C and D fields with the
|
||||
# manually mapped grade C and D field into a single grade C and D field.
|
||||
## Note: there are no manually derived C tracts at the moment
|
||||
|
||||
# Start grouping by, to sum all of the grade D parts of each tract.
|
||||
grouped_df = (
|
||||
merged_df.groupby(
|
||||
by=[
|
||||
self.GEOID_TRACT_FIELD_NAME,
|
||||
self.HOLC_GRADE_D_FIELD,
|
||||
],
|
||||
# Keep the nulls, so we know the non-D proportion.
|
||||
dropna=False,
|
||||
)[self.TRACT_PROPORTION_FIELD]
|
||||
for grade, field_name in [
|
||||
("C", self.HOLC_GRADE_C_FIELD),
|
||||
("D", self.HOLC_GRADE_D_FIELD),
|
||||
]:
|
||||
merged_df[field_name] = np.where(
|
||||
(merged_df[self.HOLC_GRADE_DERIVED_FIELD] == grade)
|
||||
| (merged_df[self.HOLC_GRADE_MANUAL_FIELD] == grade),
|
||||
True,
|
||||
None,
|
||||
)
|
||||
|
||||
redlined_dataframes_list = [
|
||||
merged_df[merged_df[field].fillna(False)]
|
||||
.groupby(self.GEOID_TRACT_FIELD_NAME)[self.TRACT_PROPORTION_FIELD]
|
||||
.sum()
|
||||
.rename(new_name)
|
||||
for field, new_name in [
|
||||
(
|
||||
self.HOLC_GRADE_D_FIELD,
|
||||
field_names.HOLC_GRADE_D_TRACT_PERCENT_FIELD,
|
||||
),
|
||||
(
|
||||
self.HOLC_GRADE_C_FIELD,
|
||||
field_names.HOLC_GRADE_C_TRACT_PERCENT_FIELD,
|
||||
),
|
||||
]
|
||||
]
|
||||
|
||||
# Group by tract ID to get tract proportions of just C or just D
|
||||
# This produces a single row per tract
|
||||
grouped_df = (
|
||||
pd.concat(
|
||||
redlined_dataframes_list,
|
||||
axis=1,
|
||||
)
|
||||
.fillna(0)
|
||||
.reset_index()
|
||||
)
|
||||
|
||||
# Create a field that is only the percent that is grade D.
|
||||
grouped_df[field_names.HOLC_GRADE_D_TRACT_PERCENT_FIELD] = np.where(
|
||||
grouped_df[self.HOLC_GRADE_D_FIELD],
|
||||
grouped_df[self.TRACT_PROPORTION_FIELD],
|
||||
0,
|
||||
grouped_df[
|
||||
field_names.HOLC_GRADE_C_OR_D_TRACT_PERCENT_FIELD
|
||||
] = grouped_df[
|
||||
[
|
||||
field_names.HOLC_GRADE_C_TRACT_PERCENT_FIELD,
|
||||
field_names.HOLC_GRADE_D_TRACT_PERCENT_FIELD,
|
||||
]
|
||||
].sum(
|
||||
axis=1
|
||||
)
|
||||
|
||||
# Calculate some specific threshold cutoffs, for convenience.
|
||||
|
@ -154,15 +191,14 @@ class MappingInequalityETL(ExtractTransformLoad):
|
|||
grouped_df[field_names.HOLC_GRADE_D_TRACT_PERCENT_FIELD] > 0.75
|
||||
)
|
||||
|
||||
# Drop the non-True values of `self.HOLC_GRADE_D_FIELD` -- we only
|
||||
# want one row per tract for future joins.
|
||||
# Note this means not all tracts will be in this data.
|
||||
# Note: this singleton comparison warning may be a pylint bug:
|
||||
# https://stackoverflow.com/questions/51657715/pylint-pandas-comparison-to-true-should-be-just-expr-or-expr-is-true-sin#comment90876517_51657715
|
||||
# pylint: disable=singleton-comparison
|
||||
grouped_df = grouped_df[
|
||||
grouped_df[self.HOLC_GRADE_D_FIELD] == True # noqa: E712
|
||||
]
|
||||
grouped_df[field_names.HOLC_GRADE_C_OR_D_TRACT_50_PERCENT_FIELD] = (
|
||||
grouped_df[field_names.HOLC_GRADE_C_OR_D_TRACT_PERCENT_FIELD] > 0.5
|
||||
)
|
||||
|
||||
# Create the indicator we will use
|
||||
grouped_df[field_names.REDLINED_SHARE] = (
|
||||
grouped_df[field_names.HOLC_GRADE_C_OR_D_TRACT_PERCENT_FIELD] > 0.5
|
||||
) & (grouped_df[field_names.HOLC_GRADE_D_TRACT_PERCENT_FIELD] > 0)
|
||||
|
||||
# Sort for convenience.
|
||||
grouped_df.sort_values(by=self.GEOID_TRACT_FIELD_NAME, inplace=True)
|
||||
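        # Summary sketch of the redlining indicator defined above, restating the new
        # code path's logic for clarity (field names are from data_pipeline.score.field_names):
        #
        #   REDLINED_SHARE = (share of tract graded C or D > 0.5)
        #                    AND (share of tract graded D > 0)
        #
        # For example, a tract with 40% grade C area and 20% grade D area has a
        # C-or-D share of 0.6 and a D share of 0.2, so REDLINED_SHARE is True.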
|
|
|
@ -8,7 +8,7 @@ According to the documentation:

There exist two data categories: Population Burden and Population Characteristics.

There are two indicators within Population Burden: Exposure and Socioeconomic. Within Population Characteristics, there are two indicators: Sensitive and Environmental Effects. Each indicator contains several relevant covariates and an averaged score.

The two "Pollution Burden" average scores are then averaged together and the result is multiplied by the average of the "Population Characteristics" categories to get the total EJ Score for each tract.
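In other words, the tract score combines the two category averages multiplicatively. A small sketch of that arithmetic, with made-up indicator averages purely for illustration (the real averages come from the Maryland EJ Screen data):

```python
# Illustrative numbers only.
exposure_avg = 0.8
socioeconomic_avg = 0.6
sensitive_avg = 0.5
environmental_effects_avg = 0.7

burden_avg = (exposure_avg + socioeconomic_avg) / 2                       # 0.70
characteristics_avg = (sensitive_avg + environmental_effects_avg) / 2     # 0.60

ej_score = burden_avg * characteristics_avg                               # 0.42
```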
@ -20,4 +20,4 @@ Furthermore, it was determined that Bladensburg residents are at a higher risk o

Source:

Driver, A.; Mehdizadeh, C.; Bara-Garcia, S.; Bodenreider, C.; Lewis, J.; Wilson, S. Utilization of the Maryland Environmental Justice Screening Tool: A Bladensburg, Maryland Case Study. Int. J. Environ. Res. Public Health 2019, 16, 348.

@ -1,11 +1,11 @@
from glob import glob

import geopandas as gpd
import pandas as pd

from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.utils import get_module_logger
from data_pipeline.score import field_names
from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.score import field_names
from data_pipeline.utils import get_module_logger

logger = get_module_logger(__name__)


@ -1,5 +1,4 @@
# Michigan EJSCREEN

<!-- markdown-link-check-disable -->
The Michigan EJSCREEN description and publication can be found [here](https://deepblue.lib.umich.edu/bitstream/handle/2027.42/149105/AssessingtheStateofEnvironmentalJusticeinMichigan_344.pdf).
<!-- markdown-link-check-enable-->

@ -30,4 +29,4 @@ Sources:

* Minnesota Pollution Control Agency. (2015, December 15). Environmental Justice Framework Report.
  Retrieved from https://www.pca.state.mn.us/sites/default/files/p-gen5-05.pdf.

* Faust, J., L. August, K. Bangia, V. Galaviz, J. Leichty, S. Prasad… and L. Zeise. (2017, January). Update to the California Communities Environmental Health Screening Tool CalEnviroScreen 3.0. Retrieved from OEHHA website: https://oehha.ca.gov/media/downloads/calenviroscreen/report/ces3report.pdf

@ -1,9 +1,8 @@
import pandas as pd

from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.utils import get_module_logger
from data_pipeline.score import field_names
from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.score import field_names
from data_pipeline.utils import get_module_logger

logger = get_module_logger(__name__)


@ -2,11 +2,11 @@
|
|||
# but it may be a known bug. https://github.com/PyCQA/pylint/issues/1498
|
||||
# pylint: disable=unsubscriptable-object
|
||||
# pylint: disable=unsupported-assignment-operation
|
||||
|
||||
import pandas as pd
|
||||
|
||||
from data_pipeline.etl.base import ExtractTransformLoad, ValidGeoLevel
|
||||
from data_pipeline.etl.base import ExtractTransformLoad
|
||||
from data_pipeline.etl.base import ValidGeoLevel
|
||||
from data_pipeline.utils import get_module_logger
|
||||
from data_pipeline.config import settings
|
||||
|
||||
logger = get_module_logger(__name__)
|
||||
|
||||
|
@ -15,8 +15,21 @@ class NationalRiskIndexETL(ExtractTransformLoad):
|
|||
"""ETL class for the FEMA National Risk Index dataset"""
|
||||
|
||||
NAME = "national_risk_index"
|
||||
SOURCE_URL = "https://hazards.fema.gov/nri/Content/StaticDocuments/DataDownload//NRI_Table_CensusTracts/NRI_Table_CensusTracts.zip"
|
||||
|
||||
if settings.DATASOURCE_RETRIEVAL_FROM_AWS:
|
||||
SOURCE_URL = (
|
||||
f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/"
|
||||
"national_risk_index/NRI_Table_CensusTracts.zip"
|
||||
)
|
||||
else:
|
||||
SOURCE_URL = (
|
||||
"https://hazards.fema.gov/nri/Content/StaticDocuments/DataDownload/"
|
||||
"NRI_Table_CensusTracts/NRI_Table_CensusTracts.zip"
|
||||
)
|
||||
|
||||
GEO_LEVEL = ValidGeoLevel.CENSUS_TRACT
|
||||
PUERTO_RICO_EXPECTED_IN_DATA = False
|
||||
LOAD_YAML_CONFIG: bool = True
|
||||
|
||||
# Output score variables (values set on datasets.yml) for linting purposes
|
||||
RISK_INDEX_EXPECTED_ANNUAL_LOSS_SCORE_FIELD_NAME: str
|
||||
|
@ -33,9 +46,6 @@ class NationalRiskIndexETL(ExtractTransformLoad):
|
|||
AGRIVALUE_LOWER_BOUND = 408000
|
||||
|
||||
def __init__(self):
|
||||
# load YAML config
|
||||
self.DATASET_CONFIG = super().yaml_config_load()
|
||||
|
||||
# define the full path for the input CSV file
|
||||
self.INPUT_CSV = self.get_tmp_path() / "NRI_Table_CensusTracts.csv"
|
||||
|
||||
|
@ -156,6 +166,27 @@ class NationalRiskIndexETL(ExtractTransformLoad):
|
|||
lower=self.AGRIVALUE_LOWER_BOUND
|
||||
)
|
||||
|
||||
## Check that this clip worked -- that the only place the value has changed is when the clip took effect
|
||||
base_expectation = (
|
||||
disaster_agriculture_sum_series
|
||||
/ df_nri[self.AGRICULTURAL_VALUE_INPUT_FIELD_NAME]
|
||||
)
|
||||
assert (
|
||||
df_nri[
|
||||
df_nri[self.EXPECTED_AGRICULTURE_LOSS_RATE_FIELD_NAME]
|
||||
!= base_expectation
|
||||
][self.AGRICULTURAL_VALUE_INPUT_FIELD_NAME].max()
|
||||
<= self.AGRIVALUE_LOWER_BOUND
|
||||
), (
|
||||
"Clipping the agrivalue did not work. There are places where the value doesn't "
|
||||
+ "match an unclipped ratio, even where the agrivalue is above the lower bound!"
|
||||
)
|
||||
|
||||
assert (
|
||||
df_nri[self.EXPECTED_AGRICULTURE_LOSS_RATE_FIELD_NAME]
|
||||
!= base_expectation
|
||||
).sum() > 0, "Clipping the agrivalue did nothing!"
|
||||
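        # Illustrative check of the clip above: with AGRIVALUE_LOWER_BOUND = 408000,
        # a tract with agricultural value 100000 and expected loss 1000 gets a rate of
        # 1000 / 408000 (clipped denominator) rather than 1000 / 100000, while tracts
        # already above the bound are untouched. The two assertions confirm that only
        # below-bound tracts changed and that the clip changed at least one row.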
|
||||
# This produces a boolean that is True in the case of non-zero agricultural value
|
||||
df_nri[self.CONTAINS_AGRIVALUE] = (
|
||||
df_nri[self.AGRICULTURAL_VALUE_INPUT_FIELD_NAME] > 0
|
||||
|
|
|
@ -0,0 +1,80 @@
# Nature deprived communities data

The following dataset was compiled by TPL (Trust for Public Lands) using NLCD data. We define natural area as: AREA - [CROPLAND] - [IMPERVIOUS SURFACES].

## Codebook
- GEOID10 – Census tract ID
- SF – State Name
- CF – County Name
- P200_PFS – Percent of individuals below 200% Federal Poverty Line (from CEJST source data).
- CA_LT20 – Percent higher ed enrollment rate is less than 20% (from CEJST source data).
- TractAcres – Acres of tract calculated from ALAND10 field (area land/meters) in 2010 census tracts.
  - CAVEAT: Some census tracts in the CEJST source file extend into open water. ALAND10 area was used to constrain percent calculations (e.g. cropland area) to land only.
- AcresCrops – Acres crops calculated by summing all cells in the NLCD Cropland Data Layer crop classes.
- PctCrops – Formula: AcresCrops/TractAcres*100.
- PctImperv – Mean imperviousness for each census tract.
  - CAVEAT: Where tracts extend into open water, mean imperviousness may be underestimated.
- __TO USE__ PctNatural – Formula: 100 – PctCrops – PctImperv (see the sketch below the codebook).
- PctNat90 – Tract in or below 10th percentile for PctNatural. 1 = True, 0 = False.
  - PctNatural 10th percentile = 28.6439%
- ImpOrCrop – If tract >= 90th percentile for PctImperv OR PctCrops. 1 = True, 0 = False.
  - PctImperv 90th percentile = 67.4146 %
  - PctCrops 90th percentile = 27.8116 %
- LowInAndEd – If tract >= 65th percentile for P200_PFS AND CA_LT20.
  - P200_PFS 65th percentile = 64.0%
- NatureDep – ImpOrCrp = 1 AND LowInAndEd = 1.

We added `GEOID10_TRACT` before converting shapefile to csv.
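A minimal pandas sketch of the codebook formulas above, assuming the columns are named exactly as in the codebook (the percentile cutoffs in the codebook are the published values; the quantile calls here recompute them from the data for illustration):

```python
import pandas as pd

df = pd.read_csv("usa_conus_nat_dep__compiled_by_TPL.csv")

df["PctCrops"] = df["AcresCrops"] / df["TractAcres"] * 100
df["PctNatural"] = 100 - df["PctCrops"] - df["PctImperv"]

# Flag tracts in or below the 10th percentile of natural land cover.
df["PctNat90"] = (df["PctNatural"] <= df["PctNatural"].quantile(0.10)).astype(int)

# Flag tracts at or above the 90th percentile for imperviousness OR cropland.
df["ImpOrCrop"] = (
    (df["PctImperv"] >= df["PctImperv"].quantile(0.90))
    | (df["PctCrops"] >= df["PctCrops"].quantile(0.90))
).astype(int)
```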
## Instructions to recreate

### Creating Impervious plus Cropland Attributes for Census Tracts

The Cropland Data Layer and NLCD Impervious layer were too big to put on our OneDrive, but you can download them here:
CDL: https://www.nass.usda.gov/Research_and_Science/Cropland/Release/datasets/2021_30m_cdls.zip
Impervious: https://s3-us-west-2.amazonaws.com/mrlc/nlcd_2019_impervious_l48_20210604.zip


#### Crops

1. Add an attribute called TractAcres (or similar) to the census tracts to hold a value representing acres covered by the census tract.
2. Calculate the TractAcres field for each census tract by using the Calculate Geometry tool (set the Property to Area (geodesic), and the Units to Acres).
3. From the Cropland Data Layer (CDL), extract only the pixels representing crops, using the Extract by Attributes tool in ArcGIS Spatial Analyst toolbox.
   a. The attribute table tells you the names of each type of land cover. Since the CDL also contains NLCD classes and empty classes, the actual crop classes must be extracted.
4. From the crops-only raster extracted from the CDL, run the Reclassify tool to create a binary layer where all crops = 1, and everything else is Null.
5. Run the Tabulate Area tool:
   a. Zone data = census tracts
   b. Input raster data = the binary crops layer
   c. This will produce a table with the square meters of crops in each census tract contained in an attribute called VALUE_1
6. Run the Join Field tool to join the table to the census tracts, with the VALUE_1 field as the Transfer Field, to transfer the VALUE_1 field (square meters of crops) to the census tracts.
7. Add a field to the census tracts called AcresCrops (or similar) to hold the acreage of crops in each census tract.
8. Calculate the AcresCrops field by multiplying the VALUE_1 field by 0.000247105 to produce acres of crops in each census tract.
   a. You can delete the VALUE_1 field.
9. Add a field called PctCrops (or similar) to hold the percent of each census tract occupied by crops.
10. Calculate the PctCrops field by dividing the AcresCrops field by the TractAcres field, and multiply by 100 to get the percent.

#### Impervious

1. Run the Zonal Statistics as Table tool:
   a. Zone data = census tracts
   b. Input raster data = impervious data raster layer
   c. Statistics type = Mean
   d. This will produce a table with the percent of each census tract occupied by impervious surfaces, contained in an attribute called MEAN
2. Run the Join Field tool to join the table to the census tracts, with the MEAN field as the Transfer Field, to transfer the MEAN field (percent impervious) to the census tracts.
3. Add a field called PctImperv (or similar) to hold the percent impervious value.
4. Calculate the PctImperv field by setting it equal to the MEAN field.
   a. You can delete the MEAN field.

#### Combine the Crops and Impervious Data

1. Open the census tracts attribute table and add a field called PctNatural (or similar). Calculate this field using this equation: 100 – PctCrops – PctImperv. This produces a value that tells you the percent of each census tract covered in natural land cover.
2. Define the census tracts that fall in the 90th percentile of non-natural land cover:
   a. Add a field called PctNat90 (or similar)
   b. Right-click on the PctNatural field, and click Sort Ascending (lowest PctNatural values on top)
   c. Select the top 10 percent of rows after the sort
   d. Click on Show Selected Records in the attribute table
   e. Calculate the PctNat90 field for the selected records = 1
   f. Clear the selection
   g. The rows that now have a value of 1 for PctNat90 are the most lacking for natural land cover, and can be symbolized accordingly in a map
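For reference, the zonal tabulation can also be approximated outside ArcGIS. A hedged sketch using `rasterstats` (an assumption: that library is not part of this pipeline, and the file names shown are placeholders for the downloaded data):

```python
import geopandas as gpd
from rasterstats import zonal_stats

tracts = gpd.read_file("census_tracts_2010.shp")  # assumed local copy of the tracts

# Mean imperviousness per tract from the NLCD impervious raster.
stats = zonal_stats(
    tracts,
    "nlcd_2019_impervious_l48_20210604.img",  # assumed name of the extracted raster
    stats=["mean"],
)
tracts["PctImperv"] = [s["mean"] for s in stats]
```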
@ -0,0 +1,77 @@
# pylint: disable=unsubscriptable-object
# pylint: disable=unsupported-assignment-operation
import pandas as pd
from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.base import ValidGeoLevel
from data_pipeline.utils import get_module_logger

logger = get_module_logger(__name__)


class NatureDeprivedETL(ExtractTransformLoad):
    """ETL class for the Nature Deprived Communities dataset"""

    NAME = "nlcd_nature_deprived"
    SOURCE_URL = (
        settings.AWS_JUSTICE40_DATASOURCES_URL
        + "/usa_conus_nat_dep__compiled_by_TPL.csv.zip"
    )
    GEO_LEVEL = ValidGeoLevel.CENSUS_TRACT
    PUERTO_RICO_EXPECTED_IN_DATA = False
    LOAD_YAML_CONFIG: bool = True
    ALASKA_AND_HAWAII_EXPECTED_IN_DATA = False

    # Output score variables (values set on datasets.yml) for linting purposes
    ELIGIBLE_FOR_NATURE_DEPRIVED_FIELD_NAME: str
    TRACT_PERCENT_IMPERVIOUS_FIELD_NAME: str
    TRACT_PERCENT_NON_NATURAL_FIELD_NAME: str
    TRACT_PERCENT_CROPLAND_FIELD_NAME: str

    def __init__(self):
        # define the full path for the input CSV file
        self.INPUT_CSV = (
            self.get_tmp_path() / "usa_conus_nat_dep__compiled_by_TPL.csv"
        )

        # this is the main dataframe
        self.df: pd.DataFrame

        # Start dataset-specific vars here
        self.PERCENT_NATURAL_FIELD_NAME = "PctNatural"
        self.PERCENT_IMPERVIOUS_FIELD_NAME = "PctImperv"
        self.PERCENT_CROPLAND_FIELD_NAME = "PctCrops"
        self.TRACT_ACRES_FIELD_NAME = "TractAcres"
        # In order to handle tracts with very small acreage, we create an eligibility
        # criterion similar to agrivalue. Here, we are ensuring that a tract has at
        # least 35 acres, or is above the 1st percentile for area.
        # This does indeed remove tracts from the 90th+ percentile later on
        self.TRACT_ACRES_LOWER_BOUND = 35

    def transform(self) -> None:
        """Reads the unzipped data file into memory and applies the following
        transformations to prepare it for the load() method:

        - Renames columns as needed
        """
        logger.info("Transforming NLCD Data")

        df_ncld: pd.DataFrame = pd.read_csv(
            self.INPUT_CSV,
            dtype={self.INPUT_GEOID_TRACT_FIELD_NAME: str},
            low_memory=False,
        )

        df_ncld[self.ELIGIBLE_FOR_NATURE_DEPRIVED_FIELD_NAME] = (
            df_ncld[self.TRACT_ACRES_FIELD_NAME] >= self.TRACT_ACRES_LOWER_BOUND
        )
        df_ncld[self.TRACT_PERCENT_NON_NATURAL_FIELD_NAME] = (
            100 - df_ncld[self.PERCENT_NATURAL_FIELD_NAME]
        )

        # Assign the final df to the class' output_df for the load method with rename
        self.output_df = df_ncld.rename(
            columns={
                self.PERCENT_IMPERVIOUS_FIELD_NAME: self.TRACT_PERCENT_IMPERVIOUS_FIELD_NAME,
                self.PERCENT_CROPLAND_FIELD_NAME: self.TRACT_PERCENT_CROPLAND_FIELD_NAME,
            }
        )
@ -1,12 +1,11 @@
import functools
import pandas as pd

import pandas as pd
from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.utils import (
    get_module_logger,
    unzip_file_from_url,
)
from data_pipeline.etl.base import ValidGeoLevel
from data_pipeline.utils import get_module_logger
from data_pipeline.utils import unzip_file_from_url

logger = get_module_logger(__name__)


@ -19,6 +18,10 @@ class PersistentPovertyETL(ExtractTransformLoad):
    Codebook: `https://s4.ad.brown.edu/Projects/Diversity/Researcher/LTBDDload/Dfiles/codebooks.pdf`.
    """

    NAME = "persistent_poverty"
    GEO_LEVEL: ValidGeoLevel = ValidGeoLevel.CENSUS_TRACT
    PUERTO_RICO_EXPECTED_IN_DATA = False

    def __init__(self):
        self.OUTPUT_PATH = self.DATA_PATH / "dataset" / "persistent_poverty"


@ -75,7 +78,7 @@ class PersistentPovertyETL(ExtractTransformLoad):
    def extract(self) -> None:
        logger.info("Starting to download 86MB persistent poverty file.")

        unzipped_file_path = self.get_tmp_path() / "persistent_poverty"
        unzipped_file_path = self.get_tmp_path()

        unzip_file_from_url(
            file_url=settings.AWS_JUSTICE40_DATASOURCES_URL

@ -155,14 +158,4 @@ class PersistentPovertyETL(ExtractTransformLoad):
            )
        )

        self.df = transformed_df

    def load(self) -> None:
        logger.info("Saving persistent poverty data.")

        # mkdir census
        self.OUTPUT_PATH.mkdir(parents=True, exist_ok=True)

        self.df[self.COLUMNS_TO_KEEP].to_csv(
            path_or_buf=self.OUTPUT_PATH / "usa.csv", index=False
        )
        self.output_df = transformed_df

@ -1,18 +1,25 @@
|
|||
from pathlib import Path
|
||||
|
||||
import geopandas as gpd
|
||||
import pandas as pd
|
||||
|
||||
from data_pipeline.config import settings
|
||||
from data_pipeline.etl.base import ExtractTransformLoad
|
||||
from data_pipeline.utils import get_module_logger, unzip_file_from_url
|
||||
from data_pipeline.score import field_names
|
||||
from data_pipeline.utils import get_module_logger
|
||||
from data_pipeline.utils import unzip_file_from_url
|
||||
|
||||
logger = get_module_logger(__name__)
|
||||
|
||||
|
||||
class TribalETL(ExtractTransformLoad):
|
||||
def __init__(self):
|
||||
self.GEOJSON_BASE_PATH = self.DATA_PATH / "tribal" / "geojson"
|
||||
self.GEOGRAPHIC_BASE_PATH = (
|
||||
self.DATA_PATH / "tribal" / "geographic_data"
|
||||
)
|
||||
self.CSV_BASE_PATH = self.DATA_PATH / "tribal" / "csv"
|
||||
self.NATIONAL_TRIBAL_GEOJSON_PATH = self.GEOJSON_BASE_PATH / "usa.json"
|
||||
self.NATIONAL_TRIBAL_GEOJSON_PATH = (
|
||||
self.GEOGRAPHIC_BASE_PATH / "usa.json"
|
||||
)
|
||||
self.USA_TRIBAL_DF_LIST = []
|
||||
|
||||
def extract(self) -> None:
|
||||
|
@ -23,43 +30,66 @@ class TribalETL(ExtractTransformLoad):
|
|||
"""
|
||||
logger.info("Downloading Tribal Data")
|
||||
|
||||
bia_geojson_url = "https://justice40-data.s3.amazonaws.com/data-sources/BIA_National_LAR_json.zip"
|
||||
alaska_geojson_url = "https://justice40-data.s3.amazonaws.com/data-sources/Alaska_Native_Villages_json.zip"
|
||||
bia_shapefile_zip_url = (
|
||||
settings.AWS_JUSTICE40_DATASOURCES_URL
|
||||
+ "/BIA_National_LAR_updated_20220929.zip"
|
||||
)
|
||||
|
||||
tsa_and_aian_geojson_zip_url = (
|
||||
settings.AWS_JUSTICE40_DATASOURCES_URL
|
||||
+ "/BIA_TSA_and_AIAN_json.zip"
|
||||
)
|
||||
|
||||
alaska_geojson_url = (
|
||||
settings.AWS_JUSTICE40_DATASOURCES_URL
|
||||
+ "/Alaska_Native_Villages_json.zip"
|
||||
)
|
||||
|
||||
unzip_file_from_url(
|
||||
bia_geojson_url,
|
||||
bia_shapefile_zip_url,
|
||||
self.TMP_PATH,
|
||||
self.DATA_PATH / "tribal" / "geojson" / "bia_national_lar",
|
||||
self.GEOGRAPHIC_BASE_PATH / "bia_national_lar",
|
||||
)
|
||||
|
||||
unzip_file_from_url(
|
||||
tsa_and_aian_geojson_zip_url,
|
||||
self.TMP_PATH,
|
||||
self.GEOGRAPHIC_BASE_PATH / "tsa_and_aian",
|
||||
)
|
||||
|
||||
unzip_file_from_url(
|
||||
alaska_geojson_url,
|
||||
self.TMP_PATH,
|
||||
self.DATA_PATH / "tribal" / "geojson" / "alaska_native_villages",
|
||||
self.GEOGRAPHIC_BASE_PATH / "alaska_native_villages",
|
||||
)
|
||||
pass
|
||||
|
||||
def _transform_bia_national_lar(self, tribal_geojson_path: Path) -> None:
|
||||
def _transform_bia_national_lar(self, path: Path) -> None:
|
||||
"""Transform the Tribal BIA National Lar Geodataframe and appends it to the
|
||||
national Tribal Dataframe List
|
||||
|
||||
Args:
|
||||
tribal_geojson_path (Path): the Path to the Tribal Geojson
|
||||
path (Path): the Path to the BIA National Lar
|
||||
|
||||
Returns:
|
||||
None
|
||||
"""
|
||||
|
||||
bia_national_lar_df = gpd.read_file(tribal_geojson_path)
|
||||
bia_national_lar_df = gpd.read_file(path)
|
||||
|
||||
# DELETE
|
||||
logger.info(f"Columns: {bia_national_lar_df.columns}\n")
|
||||
|
||||
bia_national_lar_df.drop(
|
||||
["OBJECTID", "GISAcres", "Shape_Length", "Shape_Area"],
|
||||
["GISAcres"],
|
||||
axis=1,
|
||||
inplace=True,
|
||||
)
|
||||
|
||||
bia_national_lar_df.rename(
|
||||
columns={"TSAID": "tribalId", "LARName": "landAreaName"},
|
||||
columns={
|
||||
"LARID": field_names.TRIBAL_ID,
|
||||
"LARName": field_names.TRIBAL_LAND_AREA_NAME,
|
||||
},
|
||||
inplace=True,
|
||||
)
|
||||
|
||||
|
@ -87,7 +117,10 @@ class TribalETL(ExtractTransformLoad):
|
|||
)
|
||||
|
||||
bia_aian_supplemental_df.rename(
|
||||
columns={"OBJECTID": "tribalId", "Land_Area_": "landAreaName"},
|
||||
columns={
|
||||
"OBJECTID": field_names.TRIBAL_ID,
|
||||
"Land_Area_": field_names.TRIBAL_LAND_AREA_NAME,
|
||||
},
|
||||
inplace=True,
|
||||
)
|
||||
|
||||
|
@ -113,7 +146,10 @@ class TribalETL(ExtractTransformLoad):
|
|||
)
|
||||
|
||||
bia_tsa_df.rename(
|
||||
columns={"TSAID": "tribalId", "LARName": "landAreaName"},
|
||||
columns={
|
||||
"TSAID": field_names.TRIBAL_ID,
|
||||
"LARName": field_names.TRIBAL_LAND_AREA_NAME,
|
||||
},
|
||||
inplace=True,
|
||||
)
|
||||
|
||||
|
@ -136,8 +172,8 @@ class TribalETL(ExtractTransformLoad):
|
|||
|
||||
alaska_native_villages_df.rename(
|
||||
columns={
|
||||
"GlobalID": "tribalId",
|
||||
"TRIBALOFFICENAME": "landAreaName",
|
||||
"GlobalID": field_names.TRIBAL_ID,
|
||||
"TRIBALOFFICENAME": field_names.TRIBAL_LAND_AREA_NAME,
|
||||
},
|
||||
inplace=True,
|
||||
)
|
||||
|
@ -152,27 +188,30 @@ class TribalETL(ExtractTransformLoad):
|
|||
"""
|
||||
logger.info("Transforming Tribal Data")
|
||||
|
||||
# load the geojsons
|
||||
bia_national_lar_geojson = (
|
||||
self.GEOJSON_BASE_PATH / "bia_national_lar" / "BIA_TSA.json"
|
||||
# Set the filepaths:
|
||||
bia_national_lar_shapefile = (
|
||||
self.GEOGRAPHIC_BASE_PATH / "bia_national_lar"
|
||||
)
|
||||
|
||||
bia_aian_supplemental_geojson = (
|
||||
self.GEOJSON_BASE_PATH
|
||||
/ "bia_national_lar"
|
||||
self.GEOGRAPHIC_BASE_PATH
|
||||
/ "tsa_and_aian"
|
||||
/ "BIA_AIAN_Supplemental.json"
|
||||
)
|
||||
bia_tsa_geojson_geojson = (
|
||||
self.GEOJSON_BASE_PATH / "bia_national_lar" / "BIA_TSA.json"
|
||||
|
||||
bia_tsa_geojson = (
|
||||
self.GEOGRAPHIC_BASE_PATH / "tsa_and_aian" / "BIA_TSA.json"
|
||||
)
|
||||
|
||||
alaska_native_villages_geojson = (
|
||||
self.GEOJSON_BASE_PATH
|
||||
self.GEOGRAPHIC_BASE_PATH
|
||||
/ "alaska_native_villages"
|
||||
/ "AlaskaNativeVillages.gdb.geojson"
|
||||
)
|
||||
|
||||
self._transform_bia_national_lar(bia_national_lar_geojson)
|
||||
self._transform_bia_national_lar(bia_national_lar_shapefile)
|
||||
self._transform_bia_aian_supplemental(bia_aian_supplemental_geojson)
|
||||
self._transform_bia_tsa(bia_tsa_geojson_geojson)
|
||||
self._transform_bia_tsa(bia_tsa_geojson)
|
||||
self._transform_alaska_native_villages(alaska_native_villages_geojson)
|
||||
|
||||
def load(self) -> None:
|
||||
|
@ -182,13 +221,13 @@ class TribalETL(ExtractTransformLoad):
|
|||
None
|
||||
"""
|
||||
logger.info("Saving Tribal GeoJson and CSV")
|
||||
|
||||
usa_tribal_df = gpd.GeoDataFrame(
|
||||
pd.concat(self.USA_TRIBAL_DF_LIST, ignore_index=True)
|
||||
)
|
||||
usa_tribal_df = usa_tribal_df.to_crs(
|
||||
"+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs"
|
||||
)
|
||||
|
||||
logger.info("Writing national geojson file")
|
||||
usa_tribal_df.to_file(
|
||||
self.NATIONAL_TRIBAL_GEOJSON_PATH, driver="GeoJSON"
|
||||
|
|
|
@ -1,11 +1,8 @@
from pathlib import Path

from data_pipeline.utils import (
    get_module_logger,
    remove_all_from_dir,
    remove_files_from_dir,
)

from data_pipeline.utils import get_module_logger
from data_pipeline.utils import remove_all_from_dir
from data_pipeline.utils import remove_files_from_dir

logger = get_module_logger(__name__)

|
@@ -0,0 +1,274 @@
from typing import Optional

import geopandas as gpd
import numpy as np
import pandas as pd
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.base import ValidGeoLevel
from data_pipeline.etl.sources.geo_utils import add_tracts_for_geometries
from data_pipeline.etl.sources.geo_utils import get_tract_geojson
from data_pipeline.etl.sources.geo_utils import get_tribal_geojson
from data_pipeline.score import field_names
from data_pipeline.utils import get_module_logger

logger = get_module_logger(__name__)


class TribalOverlapETL(ExtractTransformLoad):
    """Calculates the overlap between Census tracts and Tribal boundaries."""

    # Metadata for the baseclass
    NAME = "tribal_overlap"
    GEO_LEVEL = ValidGeoLevel.CENSUS_TRACT

    PUERTO_RICO_EXPECTED_IN_DATA = False
    ALASKA_AND_HAWAII_EXPECTED_IN_DATA = True
    EXPECTED_MISSING_STATES = [
        # 15 is Hawaii, which has Hawaiian Home Lands, but they are not included in
        # this dataset.
        "15",
        # The following states do not have any federally recognized Tribes in this
        # dataset.
        "10",
        "11",
        "13",
        "17",
        "18",
        "21",
        "24",
        "33",
        "34",
        "39",
        "50",
        "51",
        "54",
    ]

    # A Tribal area that requires some special processing.
    ANNETTE_ISLAND_TRIBAL_NAME = "Annette Island"

    CRS_INTEGER = 3857
    TRIBAL_OVERLAP_CUTOFF = 0.995  # Percentage of overlap that rounds to 100%

    # Define these for easy code completion
    def __init__(self):
        self.COLUMNS_TO_KEEP = [
            self.GEOID_TRACT_FIELD_NAME,
            field_names.COUNT_OF_TRIBAL_AREAS_IN_TRACT_AK,
            field_names.COUNT_OF_TRIBAL_AREAS_IN_TRACT_CONUS,
            field_names.PERCENT_OF_TRIBAL_AREA_IN_TRACT,
            field_names.NAMES_OF_TRIBAL_AREAS_IN_TRACT,
            field_names.PERCENT_OF_TRIBAL_AREA_IN_TRACT_DISPLAY,
            field_names.IS_TRIBAL_DAC,
        ]

        self.OVERALL_TRIBAL_COUNT = "OVERALL_TRIBAL_COUNT"
        self.output_df: pd.DataFrame
        self.census_tract_gdf: gpd.GeoDataFrame
        self.tribal_gdf: gpd.GeoDataFrame

    @staticmethod
    def _create_string_from_list(series: pd.Series) -> str:
        """Helper method that creates a sorted string list (for tribal names)."""
        str_list = series.tolist()
        str_list = sorted(str_list)
        return ", ".join(str_list)

    @classmethod
    def _adjust_percentage_for_frontend(
        cls,
        percentage_float: float,
    ) -> Optional[float]:
        """Round numbers very close to 0 to 0 and very close to 1 to 1 for display"""
        if percentage_float is None:
            return None
        if percentage_float < 0.01:
            return 0.0
        if percentage_float > cls.TRIBAL_OVERLAP_CUTOFF:
            return 1.0

        return percentage_float

    def extract(self) -> None:
        self.census_tract_gdf = get_tract_geojson()
        self.tribal_gdf = get_tribal_geojson()

    def transform(self) -> None:
        logger.info("Starting tribal overlap transforms.")

        # First, calculate whether tracts include any areas from the Tribal areas,
        # for both the points in AK and the polygons in the continental US (CONUS).
        tribal_overlap_with_tracts = add_tracts_for_geometries(
            df=self.tribal_gdf, tract_data=self.census_tract_gdf
        )

        # Clean up the suffixes in the tribal names
        tribal_overlap_with_tracts[field_names.TRIBAL_LAND_AREA_NAME] = (
            tribal_overlap_with_tracts[field_names.TRIBAL_LAND_AREA_NAME]
            .str.replace(" LAR", "")
            .str.replace(" TSA", "")
            .str.replace(" IRA", "")
            .str.replace(" AK", "")
        )

        tribal_overlap_with_tracts = tribal_overlap_with_tracts.groupby(
            [self.GEOID_TRACT_FIELD_NAME]
        ).agg(
            {
                field_names.TRIBAL_ID: "count",
                field_names.TRIBAL_LAND_AREA_NAME: self._create_string_from_list,
            }
        )

        tribal_overlap_with_tracts = tribal_overlap_with_tracts.reset_index()

        tribal_overlap_with_tracts = tribal_overlap_with_tracts.rename(
            columns={
                field_names.TRIBAL_ID: self.OVERALL_TRIBAL_COUNT,
                field_names.TRIBAL_LAND_AREA_NAME: field_names.NAMES_OF_TRIBAL_AREAS_IN_TRACT,
            }
        )

        # Second, calculate percentage overlap.
        # Drop the points from the Tribal data (because these cannot be joined to a
        # (Multi)Polygon tract data frame)
        tribal_gdf_without_points = self.tribal_gdf[
            self.tribal_gdf.geom_type.isin(["Polygon", "MultiPolygon"])
        ]

        # Switch from geographic to projected CRSes
        # because logically that's right
        self.census_tract_gdf = self.census_tract_gdf.to_crs(
            crs=self.CRS_INTEGER
        )
        tribal_gdf_without_points = tribal_gdf_without_points.to_crs(
            crs=self.CRS_INTEGER
        )

        # Create a measure for the entire census tract area
        self.census_tract_gdf["area_tract"] = self.census_tract_gdf.area

        # Perform the overlay function.
        # We have a mix of polygons and multipolygons, and we just want the overlaps
        # without caring a ton about the specific types, so we ignore geom type.
        # Realistically, this changes almost nothing in the calculation; True and False
        # are the same within 9 digits of precision
        gdf_joined = gpd.overlay(
            self.census_tract_gdf,
            tribal_gdf_without_points,
            how="intersection",
            keep_geom_type=False,
        )

        # Calculating the areas of the newly-created overlapping geometries
        gdf_joined["area_joined"] = gdf_joined.area

        # Calculating the areas of the newly-created geometries in relation
        # to the original tract geometries
        gdf_joined[field_names.PERCENT_OF_TRIBAL_AREA_IN_TRACT] = (
            gdf_joined["area_joined"] / gdf_joined["area_tract"]
        )

        # Aggregate the results
        percentage_results = gdf_joined.groupby(
            [self.GEOID_TRACT_FIELD_NAME]
        ).agg({field_names.PERCENT_OF_TRIBAL_AREA_IN_TRACT: "sum"})

        percentage_results = percentage_results.reset_index()

        # Merge the two results.
        merged_output_df = tribal_overlap_with_tracts.merge(
            right=percentage_results,
            how="outer",
            on=self.GEOID_TRACT_FIELD_NAME,
        )

        # Finally, fix one unique error.
        # There is one unique Tribal area (self.ANNETTE_ISLAND_TRIBAL_NAME) that is a polygon in
        # Alaska. All other Tribal areas in Alaska are points.
        # For tracts that *only* contain that Tribal area, leave percentage as is.
        # For tracts that include that Tribal area AND Alaska Native villages,
        # null the percentage, because we cannot calculate the percent of the tract
        # that is within Tribal areas.

        # Create state FIPS codes.
        merged_output_df_state_fips_code = merged_output_df[
            self.GEOID_TRACT_FIELD_NAME
        ].str[0:2]

        # Start by testing for the Annette Island exception, to make sure data is as
        # expected
        alaskan_non_annette_matches = (
            # Data from Alaska
            (merged_output_df_state_fips_code == "02")
            # Where the Tribal areas do *not* include Annette
            & (
                ~merged_output_df[
                    field_names.NAMES_OF_TRIBAL_AREAS_IN_TRACT
                ].str.contains(self.ANNETTE_ISLAND_TRIBAL_NAME)
            )
            # But somehow percentage is greater than zero.
            & (
                merged_output_df[field_names.PERCENT_OF_TRIBAL_AREA_IN_TRACT]
                > 0
            )
        )

        # There should be none of these matches.
        if sum(alaskan_non_annette_matches) > 0:
            raise ValueError(
                "Data has changed. More than one Alaskan Tribal Area has polygon "
                "boundaries. You'll need to refactor this ETL. \n"
                f"Data:\n{merged_output_df[alaskan_non_annette_matches]}"
            )

        # Now, fix the exception that is already known.
        merged_output_df[
            field_names.PERCENT_OF_TRIBAL_AREA_IN_TRACT
        ] = np.where(
            # For tracts inside Alaska
            (merged_output_df_state_fips_code == "02")
            # That are not only represented by Annette Island
            & (
                merged_output_df[field_names.NAMES_OF_TRIBAL_AREAS_IN_TRACT]
                != self.ANNETTE_ISLAND_TRIBAL_NAME
            ),
            # Set the value to `None` for tracts with more than just Annette.
            None,
            # Otherwise, set the value to what it was.
            merged_output_df[field_names.PERCENT_OF_TRIBAL_AREA_IN_TRACT],
        )

        # Counting tribes in the lower 48 is different from counting in AK,
        # so per request by the design and frontend team, we remove all the
        # counts outside AK
        merged_output_df[
            field_names.COUNT_OF_TRIBAL_AREAS_IN_TRACT_AK
        ] = np.where(
            # In Alaska
            (merged_output_df_state_fips_code == "02"),
            # Keep the counts
            merged_output_df[self.OVERALL_TRIBAL_COUNT],
            # Otherwise, null them
            None,
        )

        # TODO: Count tribal areas in the lower 48 correctly
        merged_output_df[
            field_names.COUNT_OF_TRIBAL_AREAS_IN_TRACT_CONUS
        ] = None

        merged_output_df[field_names.IS_TRIBAL_DAC] = (
            merged_output_df[field_names.PERCENT_OF_TRIBAL_AREA_IN_TRACT]
            > self.TRIBAL_OVERLAP_CUTOFF
        )

        # The very final thing we want to do is produce a string for the front end to show.
        # We do this here so that all of the logic is included.
        merged_output_df[
            field_names.PERCENT_OF_TRIBAL_AREA_IN_TRACT_DISPLAY
        ] = merged_output_df[field_names.PERCENT_OF_TRIBAL_AREA_IN_TRACT].apply(
            self._adjust_percentage_for_frontend
        )

        self.output_df = merged_output_df
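
For readers less familiar with geopandas, the percentage-overlap step in the transform above reduces to a short pattern: intersect the two layers, divide each intersection's area by its tract's area, and sum those shares per tract. The following is a minimal sketch of that technique with synthetic geometries and stand-in column names (GEOID10_TRACT, tribalId, percent_in_tract) rather than the ETL's field_names constants; it is an illustration under those assumptions, not project code.

import geopandas as gpd
from shapely.geometry import Polygon

# One square "tract" and two square "Tribal areas" that each cover a quarter of it.
# A projected CRS is used so that .area is meaningful, mirroring to_crs(CRS_INTEGER).
tracts = gpd.GeoDataFrame(
    {"GEOID10_TRACT": ["01001020100"]},
    geometry=[Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])],
    crs="EPSG:3857",
)
tribal_areas = gpd.GeoDataFrame(
    {"tribalId": ["A", "B"]},
    geometry=[
        Polygon([(0, 0), (2, 0), (2, 2), (0, 2)]),  # 25% of the tract
        Polygon([(2, 2), (4, 2), (4, 4), (2, 4)]),  # another 25%
    ],
    crs="EPSG:3857",
)

tracts["area_tract"] = tracts.area
# Intersect the layers; each row of `joined` is one tract/Tribal-area overlap piece.
joined = gpd.overlay(tracts, tribal_areas, how="intersection", keep_geom_type=False)
joined["percent_in_tract"] = joined.area / joined["area_tract"]

# Sum the shares per tract: the example tract comes out to 0.5 (50% overlap).
per_tract = joined.groupby("GEOID10_TRACT")["percent_in_tract"].sum()
print(per_tract)  # 01001020100 -> 0.5

Projecting both layers to the same planar CRS before taking .area is what makes the ratio meaningful; area ratios computed directly in geographic (lat/lon) coordinates would be distorted, which is presumably why the ETL reprojects to CRS_INTEGER first.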
data/data-pipeline/data_pipeline/etl/sources/us_army_fuds/etl.py (new file, 112 lines)
@@ -0,0 +1,112 @@
from pathlib import Path

import geopandas as gpd
import numpy as np
import pandas as pd
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.base import ValidGeoLevel
from data_pipeline.etl.sources.geo_utils import add_tracts_for_geometries
from data_pipeline.utils import download_file_from_url
from data_pipeline.utils import get_module_logger
from data_pipeline.config import settings

logger = get_module_logger(__name__)


class USArmyFUDS(ExtractTransformLoad):
    """The Formerly Used Defense Sites (FUDS)"""

    NAME: str = "us_army_fuds"

    ELIGIBLE_FUDS_COUNT_FIELD_NAME: str
    INELIGIBLE_FUDS_COUNT_FIELD_NAME: str
    ELIGIBLE_FUDS_BINARY_FIELD_NAME: str
    GEO_LEVEL: ValidGeoLevel = ValidGeoLevel.CENSUS_TRACT
    LOAD_YAML_CONFIG: bool = True

    ISLAND_AREAS_EXPECTED_IN_DATA = True

    def __init__(self):

        if settings.DATASOURCE_RETRIEVAL_FROM_AWS:
            self.FILE_URL = (
                f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/"
                "us_army_fuds/Formerly_Used_Defense_Sites_(FUDS)_"
                "all_data_reported_to_Congress_in_FY2020.geojson"
            )
        else:
            self.FILE_URL: str = (
                "https://opendata.arcgis.com/api/v3/datasets/"
                "3f8354667d5b4b1b8ad7a6e00c3cf3b1_1/downloads/"
                "data?format=geojson&spatialRefId=4326&where=1%3D1"
            )

        self.OUTPUT_PATH: Path = self.DATA_PATH / "dataset" / "us_army_fuds"

        # Constants for output
        self.COLUMNS_TO_KEEP = [
            self.GEOID_TRACT_FIELD_NAME,
            self.ELIGIBLE_FUDS_COUNT_FIELD_NAME,
            self.INELIGIBLE_FUDS_COUNT_FIELD_NAME,
            self.ELIGIBLE_FUDS_BINARY_FIELD_NAME,
        ]
        self.DOWNLOAD_FILE_NAME = self.get_tmp_path() / "fuds.geojson"

        self.raw_df: gpd.GeoDataFrame
        self.output_df: pd.DataFrame

    def extract(self) -> None:
        logger.info("Starting FUDS data download.")

        download_file_from_url(
            file_url=self.FILE_URL,
            download_file_name=self.DOWNLOAD_FILE_NAME,
            verify=True,
        )

    def transform(self) -> None:
        logger.info("Starting FUDS transform.")
        # Before we try to do any transformation, get the tract data
        # so it's loaded and the census ETL is out of scope.

        logger.info("Loading FUDS data as GeoDataFrame for transform")
        raw_df = gpd.read_file(
            filename=self.DOWNLOAD_FILE_NAME,
            low_memory=False,
        )

        # Note that the length of raw_df will not be exactly the same as
        # df_with_tracts, because some bases lack coordinates or have coordinates in
        # Mexico or in the ocean. See the following dataframe:
        # raw_df[~raw_df.OBJECTID.isin(df_with_tracts.OBJECTID)][
        #     ['OBJECTID', 'CLOSESTCITY', 'COUNTY', 'ELIGIBILITY',
        #      'STATE', 'LATITUDE', "LONGITUDE"]]
        logger.debug("Adding tracts to FUDS data")
        df_with_tracts = add_tracts_for_geometries(raw_df)
        self.output_df = pd.DataFrame()

        # This creates a boolean series directly, so no np.where is needed.
        df_with_tracts["tmp_fuds"] = (
            df_with_tracts.ELIGIBILITY == "Eligible"
        ) & (df_with_tracts.HASPROJECTS == "Yes")

        self.output_df[
            self.ELIGIBLE_FUDS_COUNT_FIELD_NAME
        ] = df_with_tracts.groupby(self.GEOID_TRACT_FIELD_NAME)[
            "tmp_fuds"
        ].sum()

        self.output_df[self.INELIGIBLE_FUDS_COUNT_FIELD_NAME] = (
            df_with_tracts[~df_with_tracts.tmp_fuds]
            .groupby(self.GEOID_TRACT_FIELD_NAME)
            .size()
        )
        self.output_df = (
            self.output_df.fillna(0).astype(np.int64).sort_index().reset_index()
        )

        self.output_df[self.ELIGIBLE_FUDS_BINARY_FIELD_NAME] = np.where(
            self.output_df[self.ELIGIBLE_FUDS_COUNT_FIELD_NAME] > 0.0,
            True,
            False,
        )
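
The per-tract counting logic in this transform boils down to: flag a site as eligible when ELIGIBILITY == "Eligible" and HASPROJECTS == "Yes", sum that boolean per tract to get the eligible count, and take the group sizes of the complement for the ineligible count. A minimal sketch follows with fabricated rows and stand-in column names (GEOID10_TRACT, eligible_count, ineligible_count, has_eligible_fuds) instead of the ETL's field-name constants; it only illustrates the pandas pattern under those assumptions.

import numpy as np
import pandas as pd

# Three fabricated FUDS sites across two fabricated tracts.
sites = pd.DataFrame(
    {
        "GEOID10_TRACT": ["01001020100", "01001020100", "02013000100"],
        "ELIGIBILITY": ["Eligible", "Ineligible", "Eligible"],
        "HASPROJECTS": ["Yes", "Yes", "No"],
    }
)

# Eligible means both conditions hold; this is a plain boolean column.
sites["tmp_fuds"] = (sites.ELIGIBILITY == "Eligible") & (sites.HASPROJECTS == "Yes")

out = pd.DataFrame()
# Summing the boolean per tract counts the eligible sites.
out["eligible_count"] = sites.groupby("GEOID10_TRACT")["tmp_fuds"].sum()
# Group sizes of the non-eligible rows count the ineligible sites.
out["ineligible_count"] = sites[~sites.tmp_fuds].groupby("GEOID10_TRACT").size()
# Tracts missing from one grouping get 0, mirroring the fillna/astype step above.
out = out.fillna(0).astype(np.int64).sort_index().reset_index()
out["has_eligible_fuds"] = out["eligible_count"] > 0

print(out)
# Expected per tract:
#   01001020100 -> eligible_count 1, ineligible_count 1, has_eligible_fuds True
#   02013000100 -> eligible_count 0, ineligible_count 1, has_eligible_fuds False

Assigning the two groupby results to columns of one DataFrame relies on pandas index alignment on the tract ID, which is the same design the ETL uses before it resets the index for output.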