Mirror of https://github.com/DOI-DO/j40-cejst-2.git (synced 2025-02-21 09:11:26 -08:00)
Backend release branch to main (#1822)
* Create deploy_be_staging.yml (#1575)
* Imputing income using geographic neighbors (#1559): Imputes the income field, with a light refactor. Needs more refactoring and more tests (I spot-checked); the next ticket will check and address that, but a lot of the "narwhal" architecture is here.
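A minimal sketch of the neighbor-based imputation idea in #1559; the neighbor map, column names, and mean-of-neighbors rule are illustrative assumptions, not the pipeline's actual implementation:

```python
import pandas as pd

# Hypothetical illustration: fill a tract's missing median income with the
# mean of its geographic neighbors' values. `neighbors` maps each GEOID to
# the GEOIDs of adjacent tracts; the real ETL derives adjacency from tract
# geometries and applies more careful rules.
def impute_income(df: pd.DataFrame, neighbors: dict) -> pd.DataFrame:
    income = df.set_index("GEOID10_TRACT")["median_income"]
    observed = income.dropna()
    for geoid in income.index[income.isna()]:
        # Look up only the originally observed neighbor values.
        vals = observed.reindex(neighbors.get(geoid, [])).dropna()
        if not vals.empty:
            income.loc[geoid] = vals.mean()
    df["median_income_imputed"] = df["GEOID10_TRACT"].map(income)
    return df
```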
* Adding HOLC indicator (#1579): Added the HOLC indicator (Historic Redlining Score) from the NCRC work; included the 3.25 cutoff and low income as part of the housing burden category.
* Update backend for Puerto Rico (#1686): Update the PR threshold count to 10; we now show 10 indicators for PR (see the discussion on the GitHub issue for more info: https://github.com/usds/justice40-tool/issues/1621). Do not use linguistic isolation for Puerto Rico; closes 1350. Co-authored-by: Shelby Switzer <shelbyswitzer@gmail.com>
* Do not drop Guam and USVI from ETL (#1681): Remove the code that drops Guam and USVI from the ETL, then add back the code for dropping rows by FIPS code: we may want this functionality, so keep it and just make the constant an empty array for now. Co-authored-by: Shelby Switzer <shelbyswitzer@gmail.com>
* Emma nechamkin/holc patch (#1742): Removing the HOLC calculation from score narwhal.
* Updating EJSCREEN data, try two (#1747)
* Rescaling linguistic isolation (#1750): Rescales linguistic isolation to drop Puerto Rico.
* Adds UST indicator (#1786): Adds leaky underground storage tanks.
* Changing LHE in tiles to a boolean (#1767): Also includes merging and cleanup of the release.
* Added indoor plumbing to CHAS and to score housing burden; first run through.
* Refactor DOE Energy Burden and COI to use YAML (#1796)
* Added tribalId for the Supplemental dataset (#1804)
* Setting zoom levels for tribal map (#1810)
* NRI dataset and initial score YAML configuration (#1534): Update the BE staging GHA; checkpoints; adding data checks for the release branch; passing tests; adding INPUT_EXTRACTED_FILE_NAME to the base class; lint; columns to keep and tests; PR review; removing the source URL, then adding it back as a class var; stop execution of the ETL if there's a YAML schema issue; clean up; force cache bust; GHA cache bust; dynamically set score vars from YAML; docstrings; removing the last-updated year; optional reverse percentile; sort order; column ordering; class-level vars; updating DatasetsConfig; fix pylint errors; moving the metadata hint back to code. Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov>
* Correct copy typo (#1809)
* Add basic test suite for COI (#1518): Update COI to use the new YAML (#1518); add tests for DOE energy burden (#1518); add a dataset config for energy burden (#1518); refactor the ETL to use datasets.yml (#1518); add fake GEOIDs to the COI tests (#1518). Refactor _setup_etl_instance_and_run_extract to the base class (#1518): for the three classes we've done so far, a generic _setup_etl_instance_and_run_extract works fine; for the moment we can reuse the same setup method until we decide future classes need more flexibility, and they can also always subclass. Add output-path tests (#1518); update the YAML to match the constant (#1518); don't blindly set the float format (#1518); add defaults for extract (#1518); run the YAML load on all subclasses (#1518); update description fields (#1518); update the YAML per the final format (#1518); update fixture tract IDs (#1518). Update the base-class refactor (#1518): now that NRI is final, my refactored code needed a small number of updates. Remove an old comment (#1518); fix a type signature and return (#1518); update per code review (#1518). Co-authored-by: Jorge Escobar <83969469+esfoobar-usds@users.noreply.github.com>, lucasmbrown-usds <lucas.m.brown@omb.eop.gov>, and Vim <86254807+vim-usds@users.noreply.github.com>
* Update etl_score_geo.py: Yikes! Fixing a merge mess-up.
* Updated to fix linting errors (#1818): Cleans and updates the base branch.
* Adding back the MapComparison video.
* Add FUDS ETL (#1817). Add spatial join method (#1871): Since we'll need to figure out the tracts for a large number of points in future tickets, add a utility to handle grabbing the tract geometries and adding tract data to a point dataset.
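The utility described in that spatial-join commit might look roughly like this with recent geopandas (frame and column names are assumptions; the `predicate=` keyword requires geopandas 0.10+):

```python
import geopandas as gpd

# Attach tract attributes to a point dataset: each point picks up the
# GEOID of the tract polygon that contains it.
def add_tracts_for_geometries(points: gpd.GeoDataFrame,
                              tracts: gpd.GeoDataFrame) -> gpd.GeoDataFrame:
    points = points.to_crs(tracts.crs)  # align CRSes before joining
    return gpd.sjoin(
        points,
        tracts[["GEOID10_TRACT", "geometry"]],
        how="left",
        predicate="within",
    )
```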
* Add FUDS, also JupyterLab (#1871)
* Add YAML configs for FUDS (#1871)
* Allow the input geoid to be optional (#1871)
* Add FUDS ETL, tests, and a test-data notebook (#1871): This adds the ETL class for Formerly Used Defense Sites (FUDS). It is different from most other ETLs, since FUDS are not provided by tract but as geographic points, so we need to assign FUDS to tracts and then do the calculations from there.
* Floats -> ints, as I intended (#1871)
* Formatting fixes (#1871)
* Add test false-positive GEOIDs (#1871)
* Add GDAL binaries (#1871)
* Refactor pandas code to be more idiomatic (#1871): Per Emma, the more pandas-y way of doing my counts is using np.where to add the values I need, then groupby and size. It is definitely more compact, and I also think more correct!
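That idiom in miniature (data and column names invented for the example):

```python
import numpy as np
import pandas as pd

# One row per site; we want per-tract counts of eligible/ineligible sites.
points = pd.DataFrame({
    "GEOID10_TRACT": ["01001020100", "01001020100", "01001020200"],
    "ELIGIBILITY":   ["Eligible", "Ineligible", "Eligible"],
})
# np.where derives the value to count, then groupby/size does the counting.
points["eligible"] = np.where(points["ELIGIBILITY"] == "Eligible", 1, 0)
counts = points.groupby(["GEOID10_TRACT", "eligible"]).size().unstack(fill_value=0)
print(counts)  # columns 0/1: ineligible and eligible site counts per tract
```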
* Update configs per Emma's suggestions (#1871)
* Type fixed! (#1871)
* Remove spurious import from VS Code (#1871)
* Snapshot update after changing a column name (#1871)
* Move up GDAL (#1871)
* Adjust GeoJSON strategy (#1871)
* Try running census separately first (#1871)
* Fix import order (#1871)
* Clean up cache strategy (#1871)
* Download census data from S3 instead of recalculating it (#1871)
* Clarify pandas code per Emma (#1871)
* Disable markdown check for link.
* Adding DOT composite to travel score (#1820): This adds the DOT dataset to the ETL and to the score. Note that currently we take a percentile of an average of percentiles.
* Adding First Street Foundation data (#1823): Adding the FSF flood and wildfire risk datasets to the score.
* First run: adding NLCD data to the ETL, but not yet to the score.
* Add abandoned mine lands data (#1824): Add a notebook to generate test data (#1780). Add Abandoned Mine Land data (#1780): using a similar structure but a simpler approach compared to FUDS, add an indicator for whether a tract has an abandoned mine. Adding some detail to the dataset READMEs, just a thought! Apply feedback from review (#1780); fix up a bad string that broke a test (#1780); update a string that I should have renamed (#1780); reduce the number of threads to reduce memory pressure (#1780); try not running geo data (#1780); run the high-memory sets separately (#1780); actually deduplicate (#1780); add a flag for memory-intensive ETLs and document it (#1780); add the flag for new datasets for the rebase (#1780). Co-authored-by: Emma Nechamkin <97977170+emma-nechamkin@users.noreply.github.com>
* Adding NLCD data (#1826): Adding NLCD's natural space indicator end to end to the score.
* Add donut hole calculation to score (#1828): Adds the adjacency index to the pipeline. Requires thorough QA.
* Adding EAMLIS and FUDS data to legacy pollution in score (#1832).
* Update to use new FSF files (#1838): The backend is partially done!
* Quick fix to kitchen or plumbing indicator: Yikes! I think I messed something up and dropped the percentile field suffix from where the KP score gets calculated. Fixing right quick.
* Fast flag update (#1844): Added additional flags for the front end based on our conversation in stand-up this morning.
* Tiles fix (#1845): Fixes score-geo and adds flags.
* Update etl_score_geo.py
* Issue 1827: Add demographics to tiles and download files (#1833): Adding demographics for use in the sidebar and download files.
* Updates backend constants to N (#1854)
* Updated to show T/F/null vs. T/F for AML and FUDS (#1866): Fix markdown; just testing that the boolean is preserved on GHA; checking that dropping tracts works; OOPS! old changes persisted; adding a check to the ag-value calculation for NRI; updated error messages; tuple type.
* Score tests (#1847): Update the Python version in the README; tuple typing fix. Alaska tribal points fix (#1821). Bump mistune from 0.8.4 to 2.0.3 in /data/data-pipeline (#1777): release notes at https://github.com/lepture/mistune/releases, changelog at https://github.com/lepture/mistune/blob/master/docs/changes.rst, commits at https://github.com/lepture/mistune/compare/v0.8.4...v2.0.3 (updated dependency: mistune, indirect). Signed-off-by: dependabot[bot] <support@github.com>; Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>. Poetry update; initial pass of score tests; add threshold tests; added the SES threshold (not donut, not island); testing suite, stopping for the day; added a test for the lead proxy indicator. Refactor score tests to make them less verbose and more direct (#1865): Clean up tests slightly before the refactor (#1846); refactor score calculation tests; feedback from review; refactor output tests like the calculation tests (#1846) (#1870); reorganize files (#1846); switch from lru_cache to fixture scopes (#1846); add tests for all factors (#1846); mark smoketests and run them as part of the BE deploy (#1846); update a renamed var (#1846). Switch from named tuple to dataclass (#1846): This is annoying, but pylint in Python 3.8 was crashing while parsing the named tuple. We weren't using any namedtuple-specific features, so I made the type a dataclass just to get pylint to behave. Add a default timeout to requests (#1846); fix a type (#1846); fix a merge mistake in poetry.lock (#1846). Signed-off-by: dependabot[bot] <support@github.com>. Co-authored-by: Jorge Escobar <jorge.e.escobar@omb.eop.gov>, Jorge Escobar <83969469+esfoobar-usds@users.noreply.github.com>, dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>, Matt Bowen <83967628+mattbowen-usds@users.noreply.github.com>, and matt bowen <matthew.r.bowen@omb.eop.gov>
* Just testing that the boolean is preserved on GHA (#1867).
* Updated with, hopefully, a fix: coercing AML, FUDS, and HRS to booleans for the raw value to preserve the null character.
* Adding tests to ensure proper calculations (#1871): Just testing that the boolean is preserved on GHA; checking that dropping tracts works; adding a check to the ag-value calculation for NRI; updated with error messages.
* Tribal tiles fix (#1874): Alaska tribal points fix (#1821); tribal tiles fix; disabling child opportunity; lint; removing COI; removing commented-out code.
* Pipeline tile tests (#1864): Temp update; updating with a FIPS check; adding a check on PFS; updating with a PFS test; update test_tiles_smoketests.py; fix lint errors (#1848); add a column-names test (#1848); mark tests as smoketests (#1848); move to the other score-related tests (#1848). Recast "Total threshold criteria exceeded" to int (#1848): In writing tests to verify that the output of the tiles CSV matches the final score CSV, I noticed TC ("Total threshold criteria exceeded") was getting cast from an int64 to a float64 in the process of PostScoreETL. I tracked it down to the line where we merge the score dataframe with constants.DATA_CENSUS_CSV_FILE_PATH: there were more than 100 tracts in the national census CSV that don't exist in the score, so those ended up with a total threshold count of np.nan, which is a float, and thereby cast those columns to float. For the moment I just cast it back.
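The dtype trap described there is easy to reproduce; a minimal sketch with illustrative column names (the real code merges against constants.DATA_CENSUS_CSV_FILE_PATH):

```python
import numpy as np
import pandas as pd

score = pd.DataFrame({"GEOID": ["01001020100"], "threshold_count": [3]})  # int64
census = pd.DataFrame({"GEOID": ["01001020100", "01001020200"]})          # superset

merged = census.merge(score, on="GEOID", how="left")
# Unmatched tracts become np.nan, silently promoting the column to float64.
assert merged["threshold_count"].dtype == np.float64

# One way to cast back (the commit doesn't say exactly how it was done):
merged["threshold_count"] = merged["threshold_count"].fillna(0).astype(np.int64)
```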
* No need for low memory (#1848); add additional tests of tiles.csv (#1848). Drop pre-2010 rows before computing the score (#1848): Note this is probably NOT the optimal place for this change; it might make more sense for each source to filter its own tracts down to the acceptable tract list. However, that would be a pretty invasive change, whereas this spot is central and plenty of other things happening in score transform could likewise be moved to sources, so for today, here's where the change will live. Fix a typo (#1848); switch from a filter to an inner join (#1848); remove no-op lines from tiles (#1848); apply feedback from review and the linter (#1848); check the values of everything in the frame (#1848); refactor the checker class (#1848); add a test for state names (#1848); cleanup from reviewing my own code (#1848); fix a lint error (#1858); apply Emma's feedback from review (#1848); remove references to national_df (#1848). Account for new, fake nullable bools in tiles (#1848): To handle a GeoJSON limitation, Emma converted some nullable boolean columns to float64 in the tiles export with the values {0.0, 1.0, nan}, giving us the same expressiveness. Sadly, this broke my assumption that all columns between the score and tiles CSVs would have the same dtypes, so I need to account for these new, fake bools in my test. Use equals instead of my worse version (#1848); missed a spot where we called _create_score_data (#1848); update per safety (#1848). Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov>
* Add tests to make sure each source makes it to the score correctly (#1878): Remove unused persistent poverty from the score (#1835); test a few datasets for overlap in the final score (#1835); add the remaining data sources (#1853); apply code-review feedback (#1835); rearrange a little for readability (#1835); add a tract test (#1835); add a test for score values (#1835); check for unmatched source tracts (#1835); clean up numeric codes to plaintext (#1835); make an import more obvious (#1835).
* Updating traffic barriers to include a low-population threshold (#1889): Changing the traffic barriers to be included only for places with recorded population.
* Remove no-land tracts from the map (#1894).
* Issue 1831: missing life expectancy data from Maine and Wisconsin (#1887): Fixing the missing states and adding tests for states to all classes.
* Removing low-population tracts from FEMA population loss (#1898): Dropping zero-population tracts from FEMA.
* 1831 follow-up (#1902): This causes no functional change to the code. It does two things: 1) uses difference instead of - to improve code style when working with sets, and 2) removes the line EXPECTED_MISSING_STATES = ["02", "15"], which is now redundant because of the line I added in a previous pull request, ALASKA_AND_HAWAII_EXPECTED_IN_DATA = False.
* Add tests for all non-census sources (#1899): Refactor CDC life expectancy (#1554); update to the new tract list (#1554); adjust for tests (#1848); add tests for cdc_places (#1848); add EJSCREEN tests (#1848); add tests for HUD housing (#1848); add tests for GeoCorr (#1848); add persistent poverty tests (#1848); update for sources without ZIPs, for the new validation (#1848). Update tests for the new multi-CSV bug (#1848): Lucas updated the CDC life expectancy data to handle a bug where two states are missing from the US Overall download. Since virtually none of our other ETL classes download multiple CSVs directly like this, it required a pretty invasive new mocking strategy. Add basic tests for nature deprived (#1848); add wildfire tests (#1848); add flood risk tests (#1848); add DOT travel tests (#1848); add historic redlining tests (#1848); add tests for ME and WI (#1848); update now that validation exists (#1848); adjust for validation (#1848); add health insurance back to cdc_places (#1848), oops; update tests with the new field (#1848); test for blank tract removal (#1848); add tracts for clipping behavior; test clipping and zfill behavior (#1848); fix a bad test assumption (#1848); simplify the class and add a test for tract padding (#1848). Fix percentage inversion, update tests (#1848): Looking through the transformations, I noticed that we were subtracting a percentage that is usually between 0 and 100 from 1 instead of 100, and so were ending up with some surprising results. Confirmed with lucasmbrown-usds. Add a note about First Street data (#1848).
* Issue 1900: Tribal overlap with census tracts (#1903): Working notebook; updating the notebook; WIP; fixing broken tests; adding tribal overlap files; calculated count and names; working; partial cleanup; updating field names; fixing a bug; removing pyogrio; removing unused imports; updating test fixtures to be more realistic; cleaning up the notebook; fixing black; fixing flake8 errors; adding tox instructions; updating etl_score; suppressing a warning. Use projected CRSes, ignore geometry types (#1900): I looked into this a bit, and in general the geometry-type mismatch changes very little about the calculation; we have a mix of multipolygons and polygons. The fastest thing to do is just not keep geometry type; I did some runs with it set to both True and False, and they're the same within 9 digits of precision. Logically we just want the overlaps, regardless of how the actual geometries are encoded between the frames, so in this case we can ignore the geometry types and feel okay. I also moved to projected CRSes, since we are actually trying to do area calculations, so we should. Again, the change is small in magnitude but logically more sound.
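A sketch of that overlap calculation under the choices described (projected CRS, geometry types ignored); EPSG:5070 and all names here are illustrative, not necessarily what the ETL uses:

```python
import geopandas as gpd

def tribal_overlap_fraction(tracts: gpd.GeoDataFrame,
                            tribal: gpd.GeoDataFrame) -> gpd.GeoDataFrame:
    # Area math needs a projected CRS; degree-based CRSes make .area meaningless.
    tracts = tracts.to_crs(epsg=5070)   # NAD83 / Conus Albers, equal-area
    tribal = tribal.to_crs(epsg=5070)
    # keep_geom_type=False: only the overlap area matters, not whether the
    # intersection comes back as polygons or multipolygons.
    overlap = gpd.overlay(tracts, tribal, how="intersection", keep_geom_type=False)
    overlap_area = overlap.dissolve(by="GEOID10_TRACT").area
    tracts = tracts.set_index("GEOID10_TRACT")
    tracts["tribal_overlap_frac"] = (overlap_area / tracts.area).fillna(0)
    return tracts.reset_index()
```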
* Re-add the CDC dataset config (#1900); adding comments to the FIPS code; delete unnecessary loggers. Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov>
* Improve score test documentation based on Lucas's feedback (#1835) (#1914): Better documentation based on Lucas's feedback (#1835); fix a typo (#1835); add a test to verify the GeoJSON matches the tiles (#1835); remove a no-op line (#1835); move GeoJSON generation up for the new smoketest (#1835); fix up code format (#1835); update the README for the new smoketest (#1835).
* Cleanup source tests (#1912): Move a test to the base class for broader coverage (#1848); remove a duplicate line (#1848); FUDS needed an extra mock (#1848).
* Add tribal count notebook (#1917) (#1919): Test without caching; added a comment. Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov>
* Add tribal overlap to downloads (#1907): Add tribal data to downloads (#1904); update the test pickle with current columns (#1904); remove the text of tribe names from the GeoJSON (#1904); update test data (#1904); add tribal overlap to smoketests (#1904).
* Issue 1910: Do not impute income for zero-population tracts (#1918): Should be working, has unnecessary loggers; removing loggers and cleaning up; updating EJSCREEN tests; adding tests and responding to PR feedback; fixing a broken smoke test; delete smoketest docs; updating click.
* Bump just jupyterlab (#1930)
* Fixing link checker (#1929)
* Update deps safety says are vulnerable (#1937) (#1938). Co-authored-by: matt bowen <matt@mattbowen.net>
* Add demos for island areas (#1932): Backfill population in island areas (#1882). Update the smoketest to account for backfills (#1882): As I wrote in the comment, we backfill island areas with data from the 2010 census, so if THOSE tracts have data beyond the data source, that's to be expected and fine to pass; if some other state or territory does, though, this should fail. This ends up being a nice way of documenting that behavior, I guess!
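A guess at the shape of that backfill, using the island-area state FIPS prefixes (60 American Samoa, 66 Guam, 69 Northern Mariana Islands, 78 U.S. Virgin Islands); the frame and column names are hypothetical:

```python
import pandas as pd

ISLAND_AREA_FIPS = {"60", "66", "69", "78"}

def backfill_island_population(score: pd.DataFrame,
                               census_2010: pd.DataFrame) -> pd.DataFrame:
    merged = score.merge(
        census_2010[["GEOID10_TRACT", "population_2010"]],
        on="GEOID10_TRACT", how="left",
    )
    island = merged["GEOID10_TRACT"].str[:2].isin(ISLAND_AREA_FIPS)
    # Only island-area tracts fall back to the 2010 decennial value.
    merged.loc[island, "population"] = merged.loc[island, "population"].fillna(
        merged.loc[island, "population_2010"]
    )
    return merged.drop(columns="population_2010")
```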
* Fix up lint issues (#1882); add race demographics to the 2010 census pull (#1851); add backfill data to the score (#1851); change a column name (#1851); fill demographics after the score (#1851); add income back and adjust the test (#1882); apply code-review feedback (#1851); add a test for the island-area backfill (#1851); fix a bad rename (#1851).
* Reorder download fields, add plumbing back (#1942): Add back the lack-of-plumbing fields (#1920); reorder fields for Excel (#1921); reorder Excel fields (#1921); fix formatting, lint errors, and pickles (#1921); add the missing plumbing column and fix the order again (#1921); update that pickle (#1921).
* Refactoring tribal (#1960)
* Updated with scoring comparison; updated for narwhal, leaving commented code in for now; pydantic upgrade.
* Produce a string for the front end to ingest (#1963): WIP; I believe this works, let's see the pipeline; updated fixtures.
* Adding ADJLI_ET (#1976): Updated tile data; ensuring adjli_et is in.
* Add back income percentile (#1977): Add the missing field to the download (#1964); remove pydantic since it's unused (#1964); add the percentile to the CSV (#1964); update the downloadable pickle (#1964).
* Issue 105: Configure and run `black` and other pre-commit hooks (clean branch) (#1962). Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov>
* Removing the fixed Python version for black (#1985)
* Fixup TA_COUNT and TA_PERC (#1991): Change TA_PERC, change TA_COUNT (#1988, #1989): make TA_PERC_STR back into a nullable float following the rules requested in #1989; move TA_COUNT to TA_COUNT_AK, and also add a null TA_COUNT_C for CONUS that we can fill in later. Fix a typo in a comment (#1988).
* Issue 1992: Do not impute income for null-population tracts (#1993)
* Hotfix for the DOT data source DNS issue (#1999)
* Make tribal overlap set score N (#2004): Add an "Is a Tribal DAC" field (#1998); add tribal DACs to score N final (#1998); add the new fields to downloads (#1998); make an int a float (#1998); update field names and apply feedback (#1998).
* Add assertions around the codebook (#2014): Add an assertion around the codebook (#1505); assert the CSV and Excel have the same columns (#1505).
* Remove suffixes from tribal lands (#1974) (#2008)
* Data source location (#2015): Data source location; toml; cdc_places; cdc_svi_index; URL updates; child opportunity and DOT travel; up to hud_recap; completed the ticket; cache bust; hud_recap; us_army_fuds.
* Remove vars the frontend doesn't use (#2020) (#2022): I did a pretty rough and simple analysis of the variables we put in the tiles and grepped the frontend code to see (1) whether they're ever accessed and (2) whether they're used, even if they're read once. I removed everything I noticed was not accessed.
* Disable file size limits on tiles (#2031): Also remove print debugs. I know.
* Update file name pattern (#2037) (#2038): Update the file name pattern (#2037). Remove the ETL from generation (#2037): I looked more carefully, and this ETL step isn't used in the score, so there's no need to run it every time; per previous steps, I removed it from constants, so the code is there but won't run by default.
* Round ALL the float fields for the tiles (#2040) (#2033). Floor in a simpler way (#2033): Emma pointed out that all the stuff we're doing in floor_series is probably unnecessary for this case, so just use the built-in floor.
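The simpler flooring, as a sketch; the ten-decimal precision is an arbitrary stand-in for whatever the tiles actually use:

```python
import numpy as np
import pandas as pd

def floor_to_decimals(series: pd.Series, decimals: int = 10) -> pd.Series:
    # Truncate (not round) float fields to a fixed number of decimals,
    # using numpy's built-in floor instead of a hand-rolled floor_series.
    factor = 10 ** decimals
    return np.floor(series * factor) / factor
```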
* Update a pickle I missed (#2033)
* Clean commit of just the aggregate burden notebook (#1819): Added a burden notebook.
* Update the Dockerfile (#2045): Update so the image builds (#2026); fix a bad dict (#2026).
* Rename the census tract field in downloads (#2068): Change the tract ID field name (#2060); update the lockfile (#2061); bump safety, jupyter, and wheel (#2061); don't depend directly on wheel (#2061); bring narwhal requirements in line with main.
* Update tribal area counts (#2071): Rename the tribal area field (#2062); add a missing file (#2062).
* Add checks to create version (#2047) (#2052)
* Fix failing safety (#2114): Ignore a vuln that doesn't affect us (#2113): https://nvd.nist.gov/vuln/detail/CVE-2022-42969 landed recently and there's no fix in py (which is in maintenance mode). From my analysis, that CVE cannot hurt us (famous last words), so we'll ignore the vuln for now. Update our GDAL PPA (#2113); that didn't work (#2113); don't add the PPA, the package exists (#2113); fix a type (#2113); force an update of wheel (#2113); also remove the PPA line from create-score-versions; drop 3.8 because of wheel (#2113); put back 3.8, use newer actions; try another way of upgrading wheel (#2113); upgrade wheel in tox too (#2113); typo fix (#2113).

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Emma Nechamkin <97977170+emma-nechamkin@users.noreply.github.com>
Co-authored-by: Shelby Switzer <shelby.c.switzer@omb.eop.gov>
Co-authored-by: Shelby Switzer <shelbyswitzer@gmail.com>
Co-authored-by: Emma Nechamkin <Emma.J.Nechamkin@omb.eop.gov>
Co-authored-by: Matt Bowen <83967628+mattbowen-usds@users.noreply.github.com>
Co-authored-by: Jorge Escobar <83969469+esfoobar-usds@users.noreply.github.com>
Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov>
Co-authored-by: Jorge Escobar <jorge.e.escobar@omb.eop.gov>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov>
Co-authored-by: matt bowen <matt@mattbowen.net>
This commit is contained in:
parent db75b8ae76
commit b97e60bfbb

285 changed files with 20485 additions and 3880 deletions
.github/CODEOWNERS (1 line changed)

@@ -1,2 +1 @@
 * @esfoobar-usds @vim-usds @emma-nechamkin @mattbowen-usds
-
.github/ISSUE_TEMPLATE/dataset_request.yaml (20 lines changed)

Hunks at @@ -56,7 +56,7 @@, @@ -110,14 +110,14 @@, @@ -126,9 +126,9 @@, @@ -151,10 +151,10 @@, and @@ -167,8 +167,8 @@ touch the issue-template fields (other-datasource-type, known data quality issues, geographic-coverage, description of geographic coverage estimate, last-updated-date, link-to-documentation, can-go-in-cloud, additional-information). Only unchanged context survived the page extraction; the modified lines themselves are not recoverable.
Pull request template (4 lines changed; the file header was lost in extraction)

Hunk @@ -19,4 +19,4 @@ ("Fixes # (issue number)") covers the checklist items "My changes generate no new warnings", "I have added tests that prove my fix is effective or that my feature works", "New and existing unit tests pass locally with my changes", and "Any dependent changes have been merged and published in downstream modules". Only unchanged context survived; the modified lines are not recoverable.
.github/workflows/create-score-version.yml (31 lines changed)

@@ -5,11 +5,12 @@ on:
       score_version:
         description: "Which version of the score are you generating?"
         required: true
-        default: 'beta'
+        default: '1.0'
         type: choice
         options:
           - beta
           - 1.0
+          - test

 env:
   CENSUS_API_KEY: ${{ secrets.CENSUS_API_KEY }}
@@ -54,7 +55,6 @@ jobs:
           aws-region: us-east-1
       - name: Install GDAL/ogr2ogr
         run: |
-          sudo add-apt-repository ppa:ubuntugis/ppa
           sudo apt-get update
           sudo apt-get -y install gdal-bin
           ogrinfo --version
@@ -64,15 +64,40 @@ jobs:
       - name: Generate Score Post
         run: |
           poetry run python3 data_pipeline/application.py generate-score-post -s aws
+      - name: Confirm we generated the version of the score we think we did
+        if: ${{ env.J40_VERSION_LABEL_STRING == '1.0' || env.J40_VERSION_LABEL_STRING == 'test' }}
+        run: |
+          grep "Identified as disadvantaged due to tribal overlap" data_pipeline/data/score/downloadable/* > /dev/null
+      - name: Confirm we generated the version of the score we think we did
+        if: ${{ env.J40_VERSION_LABEL_STRING == 'beta' }}
+        run: |
+          grep -v "Identified as disadvantaged due to tribal overlap" data_pipeline/data/score/downloadable/* > /dev/null
       - name: Generate Score Geo
         run: |
           poetry run python3 data_pipeline/application.py geo-score
+      - name: Run smoketest for 1.0
+        if: ${{ env.J40_VERSION_LABEL_STRING == '1.0' || env.J40_VERSION_LABEL_STRING == 'test' }}
+        run: |
+          poetry run pytest data_pipeline/ -m smoketest
       - name: Deploy Score to Geoplatform AWS
         run: |
           poetry run s4cmd put ./data_pipeline/data/score/csv/ s3://justice40-data/data-versions/${{env.J40_VERSION_LABEL_STRING}}/data/score/csv --recursive --force --API-ACL=public-read
           poetry run s4cmd put ./data_pipeline/files/ s3://justice40-data/data-versions/${{env.J40_VERSION_LABEL_STRING}}/data/score/downloadable --recursive --force --API-ACL=public-read
           poetry run s4cmd put ./data_pipeline/data/score/downloadable/ s3://justice40-data/data-versions/${{env.J40_VERSION_LABEL_STRING}}/data/score/downloadable --recursive --force --API-ACL=public-read
+      - name: Confirm we generated the version of the score we think we did
+        if: ${{ env.J40_VERSION_LABEL_STRING == '1.0' || env.J40_VERSION_LABEL_STRING == 'test' }}
+        run: |
+          curl "https://static-data-screeningtool.geoplatform.gov/data-versions/1.0/data/score/downloadable/1.0-shapefile-codebook.zip" -s -f -I -o /dev/null && \
+          curl "https://static-data-screeningtool.geoplatform.gov/data-versions/1.0/data/score/downloadable/1.0-communities.xlsx" -s -f -I -o /dev/null && \
+          curl "https://static-data-screeningtool.geoplatform.gov/data-versions/1.0/data/score/downloadable/1.0-communities.csv" -s -f -I -o /dev/null && \
+          curl "https://static-data-screeningtool.geoplatform.gov/data-versions/1.0/data/score/downloadable/1.0-shapefile-codebook.zip" -s -f -I -o /dev/null && \
+          curl "https://static-data-screeningtool.geoplatform.gov/data-versions/1.0/data/score/downloadable/cejst-technical-support-document.pdf" -s -f -I -o /dev/null && \
+          curl "https://static-data-screeningtool.geoplatform.gov/data-versions/1.0/data/score/downloadable/draft-communities-list.pdf" -s -f -I -o /dev/null
+      - name: Confirm we generated the version of the score we think we did
+        if: ${{ env.J40_VERSION_LABEL_STRING == 'beta' }}
+        run: |
+          curl "https://static-data-screeningtool.geoplatform.gov/data-versions/beta/data/score/downloadable/beta-data-documentation.zip" -s -f -I -o /dev/null && \
+          curl "https://static-data-screeningtool.geoplatform.gov/data-versions/beta/data/score/downloadable/beta-shapefile-codebook.zip" -s -f -I -o /dev/null
       - name: Set timezone for tippecanoe
         uses: szenius/set-timezone@v1.0
         with:
.github/workflows/data-checks.yml (7 lines changed)

@@ -22,9 +22,11 @@ jobs:
       # one execution of the tests per version listed above
       - uses: actions/checkout@v2
       - name: Set up Python ${{ matrix.python-version }}
-        uses: actions/setup-python@v2
+        uses: actions/setup-python@v4
         with:
           python-version: ${{ matrix.python-version }}
+      - name: Upgrade wheel
+        run: pip install -U wheel
       - name: Print variables to help debug
         uses: hmarr/debug-action@v2
       - name: Load cached Poetry installation
@@ -39,6 +41,7 @@ jobs:
         run: poetry show -v
       - name: Install dependencies
         run: poetry install
-        if: steps.cached-poetry-dependencies.outputs.cache-hit != 'true'
+        # TODO: investigate why caching layer started failing.
+        # if: steps.cached-poetry-dependencies.outputs.cache-hit != 'true'
       - name: Run tox
         run: poetry run tox
.github/workflows/deploy_be_staging.yml (38 lines changed)

@@ -25,9 +25,10 @@ jobs:
       - name: Print variables to help debug
         uses: hmarr/debug-action@v2
       - name: Set up Python ${{ matrix.python-version }}
-        uses: actions/setup-python@v2
+        uses: actions/setup-python@v4
         with:
           python-version: ${{ matrix.python-version }}
+      - run: pip install -U wheel
       - name: Load cached Poetry installation
         id: cached-poetry-dependencies
         uses: actions/cache@v2
@@ -35,9 +36,14 @@ jobs:
           path: ~/.cache/pypoetry/virtualenvs
           key: env-${{ runner.os }}-${{ matrix.python-version }}-${{ hashFiles('**/poetry.lock') }}-${{ hashFiles('.github/workflows/deploy_be_staging.yml') }}
       - name: Install poetry
-        uses: snok/install-poetry@v1
+        uses: snok/install-poetry@v1.3.3
       - name: Print Poetry settings
         run: poetry show -v
+      - name: Install GDAL/ogr2ogr
+        run: |
+          sudo apt-get update
+          sudo apt-get -y install gdal-bin
+          ogrinfo --version
       - name: Install dependencies
         run: poetry add s4cmd && poetry install
         if: steps.cached-poetry-dependencies.outputs.cache-hit != 'true'
@@ -47,12 +53,21 @@ jobs:
           aws-access-key-id: ${{ secrets.DATA_DEV_AWS_ACCESS_KEY_ID }}
           aws-secret-access-key: ${{ secrets.DATA_DEV_AWS_SECRET_ACCESS_KEY }}
           aws-region: us-east-1
+      - name: Download census geo data for later user
+        run: |
+          poetry run python3 data_pipeline/application.py pull-census-data -s aws
       - name: Generate Score
         run: |
           poetry run python3 data_pipeline/application.py score-full-run
       - name: Generate Score Post
         run: |
-          poetry run python3 data_pipeline/application.py generate-score-post -s aws
+          poetry run python3 data_pipeline/application.py generate-score-post
+      - name: Generate Score Geo
+        run: |
+          poetry run python3 data_pipeline/application.py geo-score
+      - name: Run Smoketests
+        run: |
+          poetry run pytest data_pipeline/ -m smoketest
       - name: Deploy Score to Geoplatform AWS
         run: |
           poetry run s4cmd put ./data_pipeline/data/score/csv/ s3://justice40-data/data-pipeline-staging/${{env.PR_NUMBER}}/${{env.SHA_NUMBER}}/data/score/csv --recursive --force --API-ACL=public-read
@@ -63,19 +78,13 @@ jobs:
         with:
           # Deploy to S3 for the Staging URL
           message: |
             ** Score Deployed! **
             Find it here:
             - Score Full usa.csv: https://justice40-data.s3.amazonaws.com/data-pipeline-staging/${{env.PR_NUMBER}}/${{env.SHA_NUMBER}}/data/score/csv/full/usa.csv
             - Download Zip Packet: https://justice40-data.s3.amazonaws.com/data-pipeline-staging/${{env.PR_NUMBER}}/${{env.SHA_NUMBER}}/data/score/downloadable/Screening_Tool_Data.zip
           repo-token: ${{ secrets.GITHUB_TOKEN }}
           repo-token-user-login: "github-actions[bot]"
           allow-repeats: false
-      - name: Install GDAL/ogr2ogr
-        run: |
-          sudo add-apt-repository ppa:ubuntugis/ppa
-          sudo apt-get update
-          sudo apt-get -y install gdal-bin
-          ogrinfo --version
       - name: Set timezone for tippecanoe
         uses: szenius/set-timezone@v1.0
         with:
@@ -93,9 +102,6 @@ jobs:
           mkdir -p /usr/local/bin
           cp tippecanoe /usr/local/bin/tippecanoe
           tippecanoe -v
-      - name: Generate Score Geo
-        run: |
-          poetry run python3 data_pipeline/application.py geo-score
       - name: Generate Tiles
         run: |
           poetry run python3 data_pipeline/application.py generate-map-tiles
@@ -110,8 +116,8 @@ jobs:
         with:
           # Deploy to S3 for the staging URL
           message: |
             ** Map Deployed! **
-            Map with Staging Backend: https://screeningtool.geoplatform.gov/en/?flags=stage_hash=${{env.PR_NUMBER}}/${{env.SHA_NUMBER}}
+            Map with Staging Backend: https://screeningtool.geoplatform.gov/en?flags=stage_hash=${{env.PR_NUMBER}}/${{env.SHA_NUMBER}}
             Find tiles here: https://justice40-data.s3.amazonaws.com/data-pipeline-staging/${{env.PR_NUMBER}}/${{env.SHA_NUMBER}}/data/score/tiles
           repo-token: ${{ secrets.GITHUB_TOKEN }}
           repo-token-user-login: "github-actions[bot]"
.github/workflows/deploy_fe_staging.yml (4 lines changed)

Hunk @@ -108,9 +108,9 @@ covers the mshick/add-pr-comment@v1 step that alerts translators when copy changes produce a new en.json (https://github.com/usds/justice40-tool/blob/${{env.COMMIT_HASH}}/client/src/intl/en.json). Only unchanged context survived extraction; the modified lines are not recoverable.
.github/workflows/e2e.yml (4 lines changed)

Hunks @@ -10,7 +10,7 @@ (the nightly schedule, cron '0 4 * * *' on ubuntu-20.04) and @@ -25,4 +25,4 @@ (the Cypress start/wait-on settings, with a commented spec example cypress/e2e/downloadPacket.spec.js) show only unchanged context; the modified lines are not recoverable.
.github/workflows/generate-census.yml (2 lines changed)

Hunk @@ -1,5 +1,5 @@ covers the workflow header (name: Generate Census; on: workflow_dispatch with a confirm-action input). Only unchanged context survived; the modified lines are not recoverable.
.github/workflows/main.yml (4 lines changed)

Hunk @@ -9,11 +9,11 @@ covers the workflow_dispatch inputs (logLevel, default 'warning'; tags: 'Ping test') and the sitePingCheck Slack-notification job. Only unchanged context survived; the modified lines are not recoverable.
@ -13,4 +13,3 @@ Los mantenedores del proyecto tienen el derecho y la obligación de eliminar, ed
|
||||||
Los casos de abuso, acoso o de otro comportamiento inaceptable se pueden denunciar abriendo un problema o contactando con uno o más de los mantenedores del proyecto en justice40open@usds.gov.
|
Los casos de abuso, acoso o de otro comportamiento inaceptable se pueden denunciar abriendo un problema o contactando con uno o más de los mantenedores del proyecto en justice40open@usds.gov.
|
||||||
|
|
||||||
Este Código de conducta es una adaptación de la versión 1.0.0 del Convenio del colaborador ([Contributor Covenant](http://contributor-covenant.org), *en inglés*) disponible en el sitio http://contributor-covenant.org/version/1/0/0/ *(en inglés)*.
|
Este Código de conducta es una adaptación de la versión 1.0.0 del Convenio del colaborador ([Contributor Covenant](http://contributor-covenant.org), *en inglés*) disponible en el sitio http://contributor-covenant.org/version/1/0/0/ *(en inglés)*.
|
||||||
|
|
||||||
|
|
|
@ -12,4 +12,4 @@ Project maintainers have the right and responsibility to remove, edit, or reject
|
||||||
|
|
||||||
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by opening an issue or contacting one or more of the project maintainers at justice40open@usds.gov.
|
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by opening an issue or contacting one or more of the project maintainers at justice40open@usds.gov.
|
||||||
|
|
||||||
This Code of Conduct is adapted from the [Contributor Covenant](http://contributor-covenant.org), version 1.0.0, available at [http://contributor-covenant.org/version/1/0/0/](http://contributor-covenant.org/version/1/0/0/)
|
This Code of Conduct is adapted from the [Contributor Covenant](http://contributor-covenant.org), version 1.0.0, available at [http://contributor-covenant.org/version/1/0/0/](http://contributor-covenant.org/version/1/0/0/)
|
||||||
|
|
|
@ -32,4 +32,4 @@ When participating in Justice40 open source community conversations and spaces,
|
||||||
- Take space and give space. We strive to create an equitable environment in which all are welcome and able to participate. We hope individuals feel comfortable voicing their opinions and providing contributions and will do our best to recognize and make space for individuals who may be struggling to find space here. Likewise, we expect individuals to recognize when they are taking up significant space and take a step back to allow room for others.
|
- Take space and give space. We strive to create an equitable environment in which all are welcome and able to participate. We hope individuals feel comfortable voicing their opinions and providing contributions and will do our best to recognize and make space for individuals who may be struggling to find space here. Likewise, we expect individuals to recognize when they are taking up significant space and take a step back to allow room for others.
|
||||||
- Be present when joining synchronous conversations such as our community chat. Why be here if you're not going to _be here_?
|
- Be present when joining synchronous conversations such as our community chat. Why be here if you're not going to _be here_?
|
||||||
- Be respectful.
|
- Be respectful.
|
||||||
- Default to positive. Assume others' contributions are legitimate and valuable and that they are made with good intention.
|
- Default to positive. Assume others' contributions are legitimate and valuable and that they are made with good intention.
|
||||||
|
|
|
@ -43,4 +43,3 @@ Si desea colaborar con alguna parte del código base, bifurque el repositorio si
|
||||||
* Al menos un revisor autorizado debe aprobar la confirmación (en [CODEOWNERS](https://github.com/usds/justice40-tool/tree/main/.github/CODEOWNERS), en inglés, consulte la lista más reciente de estos revisores).
|
* Al menos un revisor autorizado debe aprobar la confirmación (en [CODEOWNERS](https://github.com/usds/justice40-tool/tree/main/.github/CODEOWNERS), en inglés, consulte la lista más reciente de estos revisores).
|
||||||
* Todas las verificaciones de estado obligatorias deben ser aprobadas.
|
* Todas las verificaciones de estado obligatorias deben ser aprobadas.
|
||||||
Si hay un desacuerdo importante entre los integrantes del equipo, se organizará una reunión con el fin de determinar el plan de acción para la solicitud de incorporación de cambios.
|
Si hay un desacuerdo importante entre los integrantes del equipo, se organizará una reunión con el fin de determinar el plan de acción para la solicitud de incorporación de cambios.
|
||||||
|
|
||||||
|
|
|
@@ -36,7 +36,7 @@ Homebrew is an easy way to manage software downloads on MacOS. You don't _have_
You should regularly run `brew update` and `brew doctor` to make sure your packages are up to date and in good condition.

### Install Node using NVM

This will work for both MacOS and Win10. Follow the instructions in this [link](https://medium.com/@nodesource/installing-node-js-tutorial-using-nvm-5c6ff5925dd8). Be sure to read through the whole doc to find the sections within each step relevant to you (e.g. if you're using Homebrew, when you get to Step 2 look for the section "Install NVM with Homebrew").

@@ -54,7 +54,7 @@ You should then be able to switch to that version of node by:
To validate you are using node 14, type:

`node -v`

This should return the version you selected, e.g. *v14.x.x*.
@@ -28,4 +28,3 @@ For these and/or other purposes and motivations, and without any expectation of additional consideration
c. Affirmer disclaims responsibility for clearing rights of other persons that may apply to the Work or any use thereof, including without limitation any person's Copyright and Related Rights in the Work. Further, Affirmer disclaims responsibility for obtaining any necessary consents, permissions or other rights required for any use of the Work.

d. Affirmer understands and acknowledges that Creative Commons is not a party to this document and has no duty or obligation with respect to this CC0 or use of the Work.
README-es.md
@@ -2,7 +2,7 @@
[](https://github.com/usds/justice40-tool/blob/main/LICENSE.md)

*[Read this in English!](README.md)*

Welcome to the Justice40 open source community. This repository contains the code, processes, and documentation that power the data and technology of the Justice40 Climate and Economic Justice Screening Tool (CEJST).

## Background
@@ -36,7 +36,7 @@ The core team uses the group to post the latest information about
Contributions are always welcome. We appreciate contributions in the form of discussion on the issues in this repository and pull requests for documentation and code.
See [CONTRIBUTING-es.md](CONTRIBUTING-es.md) for how to get started.

## Installation

The installation is a typical gatsby installation; details can be found in [INSTALLATION-es.md](INSTALLATION-es.md)
README.md (14 changes)
@@ -11,19 +11,19 @@ The Justice40 initiative and screening tool were announced in an [Executive Orde
Please see our [Open Source Community Orientation](docs/Justice40_Open_Source_Community_Orientation.pptx) deck for more information on the Justice40 initiative, our team, this project, and ways to participate.

## Core team
The core Justice40 team building this tool is a small group of designers, developers, and product managers from the US Digital Service in partnership with the Council on Environmental Quality (CEQ).

An up-to-date list of core team members can be found in [MAINTAINERS.md](MAINTAINERS.md). The engineering members of the core team who maintain the code in this repo are listed in [.github/CODEOWNERS](.github/CODEOWNERS).

## Community
The Justice40 team is taking a community-first and open source approach to the product development of this tool. We believe government software should be made in the open and be built and licensed such that anyone can take the code, run it themselves without paying money to third parties or using proprietary software, and use it as they will.

We know that we can learn from a wide variety of communities, including those who will use or will be impacted by the tool, who are experts in data science or technology, or who have experience in climate, economic, or environmental justice work. We are dedicated to creating forums for continuous conversation and feedback to help shape the design and development of the tool.

We also recognize capacity building as a key part of involving a diverse open source community. We are doing our best to use accessible language, provide technical and process documents in multiple languages, and offer support to our community members of a wide variety of backgrounds and skillsets, directly or in the form of group chats and training. If you have ideas for how we can improve or add to our capacity building efforts and methods for welcoming folks into our community, please let us know in the [Google Group](https://groups.google.com/u/4/g/justice40-open-source) or email us at justice40open@usds.gov.

### Community Guidelines
Principles and guidelines for participating in our open source community are available [here](COMMUNITY_GUIDELINES.md). Please read them before joining or starting a conversation in this repo or one of the channels listed below.

### Community Chats
We host open source community chats every third Monday of the month at 5-6pm ET. You can find information about the agenda and how to participate in our [Google Group](https://groups.google.com/u/4/g/justice40-open-source).

@@ -31,15 +31,15 @@ We host open source community chats every third Monday of the month at 5-6pm ET.
Community members are welcome to share updates or propose topics for discussion in community chats. Please do so in the Google Group.

### Google Group
Our [Google Group](https://groups.google.com/u/4/g/justice40-open-source) is open to anyone to join and share their knowledge or experiences, as well as to ask questions of the core Justice40 team or the wider community.

The core team uses the group to post updates on the program and tech/data issues, and to share the agenda and call for community participation in the community chat.

Curious about whether to ask a question here as a GitHub issue or in the Google Group? The general rule of thumb is that issues are for actionable topics related to the tool or data itself (e.g. questions about a specific data set in use, or suggestions for a new tool feature), and the Google Group is for more general topics or questions. If you can't decide, use the Google Group and we'll discuss it there before moving to GitHub if appropriate!

## Contributing

Contributions are always welcome! We encourage contributions in the form of discussion on issues in this repo and pull requests of documentation and code.

See [CONTRIBUTING.md](CONTRIBUTING.md) for ways to get started.
data/data-pipeline/.pre-commit-config.yaml (new file, 38 lines)
@@ -0,0 +1,38 @@
+exclude: ^client|\.csv
+repos:
+  - repo: https://github.com/pre-commit/pre-commit-hooks
+    rev: v4.3.0
+    hooks:
+      - id: end-of-file-fixer
+      - id: trailing-whitespace
+
+  - repo: https://github.com/lucasmbrown/mirrors-autoflake
+    rev: v1.3
+    hooks:
+      - id: autoflake
+        args:
+          [
+            "--in-place",
+            "--remove-all-unused-imports",
+            "--remove-unused-variables",
+            "--ignore-init-module-imports",
+          ]
+
+  - repo: https://github.com/pycqa/isort
+    rev: 5.10.1
+    hooks:
+      - id: isort
+        name: isort (python)
+        args:
+          [
+            "--force-single-line-imports",
+            "--profile=black",
+            "--line-length=80",
+            "--src-path=.:data/data-pipeline"
+          ]
+
+  - repo: https://github.com/ambv/black
+    rev: 22.8.0
+    hooks:
+      - id: black
+        args: [--config=./data/data-pipeline/pyproject.toml]
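Once these hooks are installed (see the pre-commit setup steps in the data pipeline README below), `pre-commit run --all-files` applies all of these fixers across the repository in one pass.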
data/data-pipeline/Dockerfile
@@ -1,7 +1,9 @@
FROM ubuntu:20.04

+ENV TZ=America/Los_Angeles
+
# Install packages
-RUN apt-get update && apt-get install -y \
+RUN apt-get update && TZ=America/Los_Angeles DEBIAN_FRONTEND=noninteractive apt-get install -y \
    build-essential \
    make \
    gcc \
@@ -9,10 +11,10 @@ RUN apt-get update && apt-get install -y \
    unzip \
    wget \
    python3-dev \
-    python3-pip
+    python3-pip \
+    gdal-bin

# tippecanoe
-ENV TZ=America/Los_Angeles
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get install -y software-properties-common libsqlite3-dev zlib1g-dev
RUN apt-add-repository -y ppa:git-core/ppa
@@ -35,7 +37,7 @@ ENV PYTHONFAULTHANDLER=1 \
    POETRY_VERSION=1.1.12

WORKDIR /data-pipeline
-COPY pyproject.toml /data-pipeline/
+COPY . /data-pipeline

RUN pip install "poetry==$POETRY_VERSION"
RUN poetry config virtualenvs.create false \
@@ -43,6 +45,5 @@ RUN poetry config virtualenvs.create false \
    && poetry install --no-dev --no-interaction --no-ansi

# Copy all project files into the container
-COPY . /data-pipeline

CMD python3 -m data_pipeline.application data-full-run --check -s aws
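One note on the change above: presetting `TZ` and `DEBIAN_FRONTEND=noninteractive` for `apt-get install` keeps the tzdata package from stopping the image build with an interactive timezone prompt.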
@@ -12,11 +12,14 @@
- [2. Extract-Transform-Load (ETL) the data](#2-extract-transform-load-etl-the-data)
- [3. Combined dataset](#3-combined-dataset)
- [4. Tileset](#4-tileset)
+- [5. Shapefiles](#5-shapefiles)
- [Score generation and comparison workflow](#score-generation-and-comparison-workflow)
- [Workflow Diagram](#workflow-diagram)
- [Step 0: Set up your environment](#step-0-set-up-your-environment)
- [Step 1: Run the script to download census data or download from the Justice40 S3 URL](#step-1-run-the-script-to-download-census-data-or-download-from-the-justice40-s3-url)
- [Step 2: Run the ETL script for each data source](#step-2-run-the-etl-script-for-each-data-source)
+- [Table of commands](#table-of-commands)
+- [ETL steps](#etl-steps)
- [Step 3: Calculate the Justice40 score experiments](#step-3-calculate-the-justice40-score-experiments)
- [Step 4: Compare the Justice40 score experiments to other indices](#step-4-compare-the-justice40-score-experiments-to-other-indices)
- [Data Sources](#data-sources)
@@ -26,21 +29,27 @@
- [MacOS](#macos)
- [Windows Users](#windows-users)
- [Setting up Poetry](#setting-up-poetry)
-- [Downloading Census Block Groups GeoJSON and Generating CBG CSVs](#downloading-census-block-groups-geojson-and-generating-cbg-csvs)
+- [Running tox](#running-tox)
+- [The Application entrypoint](#the-application-entrypoint)
+- [Downloading Census Block Groups GeoJSON and Generating CBG CSVs (not normally required)](#downloading-census-block-groups-geojson-and-generating-cbg-csvs-not-normally-required)
+- [Run all ETL, score and map generation processes](#run-all-etl-score-and-map-generation-processes)
+- [Run both ETL and score generation processes](#run-both-etl-and-score-generation-processes)
+- [Run all ETL processes](#run-all-etl-processes)
- [Generating Map Tiles](#generating-map-tiles)
- [Serve the map locally](#serve-the-map-locally)
- [Running Jupyter notebooks](#running-jupyter-notebooks)
- [Activating variable-enabled Markdown for Jupyter notebooks](#activating-variable-enabled-markdown-for-jupyter-notebooks)
-- [Miscellaneous](#miscellaneous)
- [Testing](#testing)
- [Background](#background)
-- [Configuration / Fixtures](#configuration--fixtures)
+- [Score and post-processing tests](#score-and-post-processing-tests)
- [Updating Pickles](#updating-pickles)
-- [Future Enchancements](#future-enchancements)
+- [Future Enhancements](#future-enhancements)
-- [ETL Unit Tests](#etl-unit-tests)
+- [Fixtures used in ETL "snapshot tests"](#fixtures-used-in-etl-snapshot-tests)
+- [Other ETL Unit Tests](#other-etl-unit-tests)
- [Extract Tests](#extract-tests)
- [Transform Tests](#transform-tests)
- [Load Tests](#load-tests)
+- [Smoketests](#smoketests)

<!-- /TOC -->
@@ -196,7 +205,7 @@ Here's a list of commands:

## Local development

-You can run the Python code locally without Docker to develop, using Poetry. However, to generate the census data you will need the [GDAL library](https://github.com/OSGeo/gdal) installed locally. Also to generate tiles for a local map, you will need [Mapbox tippecanoe](https://github.com/mapbox/tippecanoe). Please refer to the repos for specific instructions for your OS.
+You can run the Python code locally without Docker to develop, using Poetry. However, to generate the census data you will need the [GDAL library](https://github.com/OSGeo/gdal) installed locally. For score generation, you will need [libspatialindex](https://libspatialindex.org/en/latest/). And to generate tiles for a local map, you will need [Mapbox tippecanoe](https://github.com/mapbox/tippecanoe). Please refer to the repos for specific instructions for your OS.

### VSCode

@@ -218,6 +227,7 @@ To install the above-named executables:

- gdal: `brew install gdal`
- Tippecanoe: `brew install tippecanoe`
+- spatialindex: `brew install spatialindex`

Note: For MacOS Monterey or M1 Macs, [you might need to follow these steps](https://stackoverflow.com/a/70880741) to install Scipy.
@@ -229,10 +239,71 @@ If you want to run tile generation, please install TippeCanoe [following these instructions]

- Start a terminal
- Change to this directory (`/data/data-pipeline/`)
-- Make sure you have at least Python 3.7 installed: `python -V` or `python3 -V`
+- Make sure you have at least Python 3.8 installed: `python -V` or `python3 -V`
- We use [Poetry](https://python-poetry.org/) for managing dependencies and building the application. Please follow the instructions on their site to download.
- Install Poetry requirements with `poetry install`
+### Running tox
+
+Our full test and check suite is run using tox. This can be run using commands such
+as `poetry run tox`.
+
+Each run can take a while to build the whole environment. If you'd like to save time,
+you can use the previously built environment by running `poetry run tox -e lint`
+which will drastically speed up the linting process.
+### Configuring pre-commit hooks
+
+<!-- markdown-link-check-disable -->
+To promote consistent code style and quality, we use git pre-commit hooks to
+automatically lint and reformat our code before every commit we make to the codebase.
+Pre-commit hooks are defined in the file [`.pre-commit-config.yaml`](../.pre-commit-config.yaml).
+<!-- markdown-link-check-enable -->
+
+1. First, install [`pre-commit`](https://pre-commit.com/) globally:
+
+    $ brew install pre-commit
+
+2. While in the `data/data-pipeline` directory, run `pre-commit install` to install
+the specific git hooks used in this repository.
+
+Now, any time you commit code to the repository, the hooks will run on all modified files automatically. If you wish,
+you can force a re-run on all files with `pre-commit run --all-files`.
+
+#### Conflicts between backend and frontend git hooks
+
+<!-- markdown-link-check-disable -->
+In the front-end part of the codebase (the `justice40-tool/client` folder), we use
+`Husky` to run pre-commit hooks for the front-end. This is different than the
+`pre-commit` framework we use for the backend. The frontend `Husky` hooks are
+configured at
+[client/.husky](client/.husky).
+
+It is not possible to run both our `Husky` hooks and `pre-commit` hooks on every
+commit; either one or the other will run.
+
+<!-- markdown-link-check-enable -->
+
+`Husky` is installed every time you run `npm install`. To use the `Husky` front-end
+hooks during front-end development, simply run `npm install`.
+
+However, running `npm install` overwrites the backend hooks set up by `pre-commit`.
+To restore the backend hooks after running `npm install`, do the following:
+
+1. Run `pre-commit install` while in the `data/data-pipeline` directory.
+2. The terminal should respond with an error message such as:
+
+```
+[ERROR] Cowardly refusing to install hooks with `core.hooksPath` set.
+hint: `git config --unset-all core.hooksPath`
+```
+
+This error is caused by having previously run `npm install` which used `Husky` to
+overwrite the hooks path.
+
+3. Follow the hint and run `git config --unset-all core.hooksPath`.
+4. Run `pre-commit install` again.
+
+Now `pre-commit` and the backend hooks should take precedence.
### The Application entrypoint

After installing the poetry dependencies, you can see a list of commands with the following steps:
@@ -303,7 +374,11 @@ see [python-markdown docs](https://github.com/ipython-contrib/jupyter_contrib_nb

### Background

-For this project, we make use of [pytest](https://docs.pytest.org/en/latest/) for testing purposes. To run tests, simply run `poetry run pytest` in this directory (i.e., `justice40-tool/data/data-pipeline`).
+<!-- markdown-link-check-disable -->
+For this project, we make use of [pytest](https://docs.pytest.org/en/latest/) for testing purposes.
+<!-- markdown-link-check-enable-->
+
+To run tests, simply run `poetry run pytest` in this directory (i.e., `justice40-tool/data/data-pipeline`).

Test data is configured via [fixtures](https://docs.pytest.org/en/latest/explanation/fixtures.html).
@@ -350,7 +425,8 @@ We have four pickle files that correspond to expected files:

To update the pickles, let's go one by one:

-For the `score_transformed_expected.pkl`, put a breakpoint on [this line](https://github.com/usds/justice40-tool/blob/main/data/data-pipeline/data_pipeline/etl/score/tests/test_score_post.py#L58), before the `pdt.assert_frame_equal` and run:
+For the `score_transformed_expected.pkl`, put a breakpoint on [this line](https://github.com/usds/justice40-tool/blob/main/data/data-pipeline/data_pipeline/etl/score/tests/test_score_post.py#L62), before the `pdt.assert_frame_equal` and run:
`pytest data_pipeline/etl/score/tests/test_score_post.py::test_transform_score`

Once on the breakpoint, capture the df to a pickle as follows:
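For reference, the capture looks roughly like the sketch below; `df` stands in for whatever dataframe variable is live at the breakpoint, and the target path should mirror wherever the existing pickle lives in the repo:

```
# Minimal sketch: adapt `df` and the pickle path to the test you are updating.
from pathlib import Path

import pandas as pd


def capture_snapshot(df: pd.DataFrame, data_path: Path) -> None:
    # Overwrites the expected snapshot with the dataframe captured at the breakpoint.
    df.to_pickle(
        data_path / "data_pipeline" / "etl" / "score" / "tests"
        / "score_transformed_expected.pkl"
    )
```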
@@ -378,7 +454,7 @@ score_data_actual.to_pickle(data_path / "data_pipeline" / "etl" / "score" / "tes

Then take out the breakpoint and re-run the test: `pytest data_pipeline/etl/score/tests/test_score_post.py::test_create_score_data`

-For the `tile_data_expected.pkl`, put a breakpoint on [this line](https://github.com/usds/justice40-tool/blob/main/data/data-pipeline/data_pipeline/etl/score/tests/test_score_post.py#L86), before the `pdt.assert_frame_equal` and run:
+For the `tile_data_expected.pkl`, put a breakpoint on [this line](https://github.com/usds/justice40-tool/blob/main/data/data-pipeline/data_pipeline/etl/score/tests/test_score_post.py#L90), before the `pdt.assert_frame_equal` and run:
`pytest data_pipeline/etl/score/tests/test_score_post.py::test_create_tile_data`

Once on the breakpoint, capture the df to a pickle as follows:
@@ -418,7 +494,9 @@ In the future, we could adopt any of the below strategies to work around this:

1. We could use [pytest-snapshot](https://pypi.org/project/pytest-snapshot/) to automatically store the output of each test as data changes. This would make it so that you could avoid having to generate a pickle for each method - instead, you would only need to call `generate` once, and only when the dataframe had changed.

+<!-- markdown-link-check-disable -->
Additionally, you could use a pandas type schema annotation such as [pandera](https://pandera.readthedocs.io/en/stable/schema_models.html?highlight=inputschema#basic-usage) to annotate input/output schemas for given functions, and your unit tests could use these to validate explicitly. This could be of very high value for annotating expectations.
+<!-- markdown-link-check-enable-->
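To make that concrete, a minimal pandera sketch might look like the following (the column names are invented for illustration, not the pipeline's actual schema):

```
# A minimal sketch of the pandera idea; columns are illustrative placeholders.
import pandera as pa
from pandera.typing import DataFrame, Series


class ScoreSchema(pa.SchemaModel):
    GEOID10_TRACT: Series[str] = pa.Field(nullable=False)
    score: Series[float] = pa.Field(ge=0, le=1)


@pa.check_types
def transform_score(df: DataFrame[ScoreSchema]) -> DataFrame[ScoreSchema]:
    # The decorator validates the input and the returned dataframe against the schema.
    return df
```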
Alternatively, or in conjunction, you could move toward using a more strictly-typed container format for read/writes such as SQL/SQLite, and use something like [SQLModel](https://github.com/tiangolo/sqlmodel) to handle more explicit type guarantees.
@@ -440,19 +518,19 @@ In order to update the snapshot fixtures of an ETL class, follow these steps:

1. If you need to manually update the fixtures, update the "furthest upstream" source
that is called by `_setup_etl_instance_and_run_extract`. For instance, this may
involve creating a new zip file that imitates the source data. (e.g., for the
National Risk Index test, update
`data_pipeline/tests/sources/national_risk_index/data/NRI_Table_CensusTracts.zip`
which is a 64kb imitation of the 405MB source NRI data.)
2. Run `pytest . -rsx --update_snapshots` to update snapshots for all files, or you
can pass a specific file name to pytest to be more precise (e.g., `pytest
data_pipeline/tests/sources/national_risk_index/test_etl.py -rsx --update_snapshots`)
3. Re-run pytest without the `update_snapshots` flag (e.g., `pytest . -rsx`) to
ensure the tests now pass.
4. Carefully check the `git diff` for the updates to all test fixtures to make sure
these are as expected. This part is very important. For instance, if you changed a
column name, you would only expect the column name to change in the output. If
you modified the calculation of some data, spot check the results to see if the
numbers in the updated fixtures are as expected.

### Other ETL Unit Tests
@@ -485,3 +563,13 @@ See above [Fixtures](#configuration--fixtures) section for information about whe

These make use of [tmp_path_factory](https://docs.pytest.org/en/latest/how-to/tmp_path.html) to create a file-system located under `temp_dir`, and validate whether the correct files are written to the correct locations.

Additional future modifications could include the use of Pandera and/or other schema validation tools, and/or a more explicit test that the data written to file can be read back in and yield the same dataframe.

+### Smoketests
+
+To ensure the score and tiles process correctly, there is a suite of "smoke tests" that can be run after the ETL and score data have been run, and outputs like the frontend GEOJSON have been created.
+These tests are implemented as pytest tests, but are skipped by default. To run them:
+
+1. Generate a full score with `poetry run python3 data_pipeline/application.py score-full-run`
+2. Generate the tile data with `poetry run python3 data_pipeline/application.py generate-score-post`
+3. Generate the frontend GEOJSON with `poetry run python3 data_pipeline/application.py geo-score`
+4. Select the smoke tests for pytest with `poetry run pytest data_pipeline/tests -k smoketest`
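For orientation, a smoke test along these lines could be sketched as follows (the checked path and test name are illustrative, not the suite's actual contents):

```
# Illustrative sketch only; the real smoke tests live in data_pipeline/tests.
import pathlib

import pytest

GEOJSON_DIR = pathlib.Path("data_pipeline/data/score/geojson")  # hypothetical output location


@pytest.mark.skipif(
    not GEOJSON_DIR.exists(),
    reason="smoke tests require the score and geo outputs to be generated first",
)
def test_geojson_written_smoketest():
    # `pytest -k smoketest` matches on the test name, so only these tests are selected.
    assert any(GEOJSON_DIR.glob("*.json"))
```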
data/data-pipeline/data_pipeline/application.py
@@ -1,30 +1,27 @@
-from subprocess import call
import sys
-import click
+from subprocess import call

+import click
from data_pipeline.config import settings
-from data_pipeline.etl.runner import (
-    etl_runner,
-    score_generate,
-    score_geo,
-    score_post,
-)
+from data_pipeline.etl.runner import etl_runner
+from data_pipeline.etl.runner import score_generate
+from data_pipeline.etl.runner import score_geo
+from data_pipeline.etl.runner import score_post
+from data_pipeline.etl.sources.census.etl_utils import check_census_data_source
from data_pipeline.etl.sources.census.etl_utils import (
    reset_data_directories as census_reset,
-    zip_census_data,
)
+from data_pipeline.etl.sources.census.etl_utils import zip_census_data
from data_pipeline.etl.sources.tribal.etl_utils import (
    reset_data_directories as tribal_reset,
)
from data_pipeline.tile.generate import generate_tiles
-from data_pipeline.utils import (
-    data_folder_cleanup,
-    get_module_logger,
-    score_folder_cleanup,
-    downloadable_cleanup,
-    temp_folder_cleanup,
-    check_first_run,
-)
+from data_pipeline.utils import check_first_run
+from data_pipeline.utils import data_folder_cleanup
+from data_pipeline.utils import downloadable_cleanup
+from data_pipeline.utils import get_module_logger
+from data_pipeline.utils import score_folder_cleanup
+from data_pipeline.utils import temp_folder_cleanup

logger = get_module_logger(__name__)
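Note that the switch to one import per line here matches the `--force-single-line-imports` option that the new `.pre-commit-config.yaml` above passes to isort.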
@@ -35,8 +32,6 @@ dataset_cli_help = "Grab the data from either 'local' for local access or 'aws'
def cli():
    """Defines a click group for the commands below"""

-    pass
-
@cli.command(help="Clean up all census data folders")
def census_cleanup():
@@ -96,6 +91,23 @@ def census_data_download(zip_compress):
    sys.exit()


+@cli.command(help="Retrieve census data from source")
+@click.option(
+    "-s",
+    "--data-source",
+    default="local",
+    required=False,
+    type=str,
+    help=dataset_cli_help,
+)
+def pull_census_data(data_source: str):
+    logger.info("Pulling census data from %s", data_source)
+    data_path = settings.APP_ROOT / "data" / "census"
+    check_census_data_source(data_path, data_source)
+    logger.info("Finished pulling census data")
+    sys.exit()
+
+
@cli.command(
    help="Run all ETL processes or a specific one",
)
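Since click derives command names from function names, the new command should be invocable as something like `poetry run python3 data_pipeline/application.py pull-census-data -s aws` (hyphenated, with `-s` selecting the data source as in the other commands).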
@@ -2,13 +2,13 @@

## Comparison tool

TODO once the comparison tool has been refactored.

## Single comparator score comparisons

The goal of this directory is to create interactive 1-to-1 dac list:cejst comparisons. That means that, when this tool is run, you will have comparisons of two true/false classifications.

This uses `papermill` to parameterize a jupyter notebook, and is meant to be a *lightweight* entry into this analysis. The tool as a whole creates a bunch of comparisons against CEJST data -- but after it runs, you'll have the notebook to re-run and add to if you are so inclined.

To run:
`$ python src/run_tract_comparison.py --template_notebook=TEMPLATE.ipynb --parameter_yaml=PARAMETERS.yaml`
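Under the hood this is a thin wrapper around papermill. A condensed sketch of the flow, assuming illustrative file names (see `src/run_tract_comparison.py` for the real argument handling):

```
# Condensed sketch; output naming is illustrative (the real script appends the date).
import datetime
import os

import papermill as pm
import yaml

with open("PARAMETERS.yaml") as f:
    params = yaml.safe_load(f)

output_notebook = os.path.join(
    params["OUTPUT_DATA_PATH"],
    f"tract_comparison__{datetime.date.today().isoformat()}.ipynb",
)
# papermill injects the yaml values as a parameters cell, then executes every cell.
pm.execute_notebook("TEMPLATE.ipynb", output_notebook, parameters=params)
```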
@@ -19,52 +19,52 @@ For example, if I am running this from the `comparison_tool` directory within th

__What is the template notebook?__

This gets filled in by the parameters in the yaml file and then executed. Even after execution, it is runnable and interactive. You do not need to change anything in this (with the caveat -- depending on how you run `jupyter lab`, you might need to add `import sys` and then `sys.path.append("../../../../")` to run the notebook live).

__What is the output?__

When you run this, you'll get back three files:
1. The filled-in parameter notebook that you can run live, with the date appended. This means if you run the script twice in one day, the notebook will get overridden, but if you run the script on two consecutive days, you will get two separate notebooks saved.
2. A graph that shows the relative average of the specified `ADDITIONAL_DEMO_COLUMNS` and `DEMOGRAPHIC_COLUMNS` segmented by CEJST and the comparator you include. This gets overridden with every run.
3. An excel file with many tabs that has summary statistics from the comparison of the two classifications (the cejst and the comparator).

In more detail, the excel file contains the following tabs:
- `Summary`: out of all tracts (even if you keep missing), how many tracts are classified TRUE/FALSE by the comparator and CEJST, by population and number.
- `Tract level stats`: overall, for all tracts classified as TRUE for CEJST and the comparator, how do the demographics of those tracts compare? Here, we think of "demographics" loosely -- whatever columns you include in the parameter yaml will show up. For example, if my additional demographics column in the yaml included `percent of households in linguistic isolation`, I'd see the average percent of households in linguistic isolation for the comparator-identified tracts (where the comparator is TRUE) and for CEJST-identified tracts.
- `Population level stats`: same demographic variables, looking at population within tract. Since not all tracts have the same number of people, this will be slightly different. This also includes segments of the population -- where you can investigate the disjoint set of tracts identified by a single method (e.g., you could specifically look at tracts identified by CEJST but not by the comparator.)
- `Segmented tract level stats`: segmented version of the tract-level stats.
- (Optional -- requires not disjoint set of tracts) `Comparator and CEJST overlap`: shows the overlap from the vantage point of the comparator ("what share of the tracts that the comparator identifies are also identified in CEJST?"). Also lists the states the comparator has information for.

__What parameters go in the yaml file?__

- ADDITIONAL_DEMO_COLUMNS: list, demographic columns from the score file that you want to run analyses on. All columns here will appear in the excel file and the graph.
- COMPARATOR_COLUMN: the name of the column that has a boolean (*must be TRUE / FALSE*) for whether or not the tract is prioritized. You provide this!
- DEMOGRAPHIC_COLUMNS: list, demographic columns from another file that you'd like to include in the analysis.
- DEMOGRAPHIC_FILE: the file that has the census demographic information. This name suggests, in theory, that you've run our pipeline and are using the ACS output -- but any file with `GEOID10_TRACT` as the field with census tract IDs will work.
- OUTPUT_DATA_PATH: where you want the output to be. Convention: output + folder named for the data source. Note that the folder name of the data source gets read as the "data name" for some of the outputs.
- SCORE_COLUMN: the name of the CEJST boolean score column.
- SCORE_FILE: CEJST full score file. This requires that you've run our pipeline, but in theory, the downloaded file should also work, provided the columns are named appropriately.
- TOTAL_POPULATION_COLUMN: column name for total population. We use `Total Population` currently in our pipeline.
- OTHER_COMPARATOR_COLUMNS: list, other columns from the comparator file you might want to read in for analysis. This is an optional argument. You will keep these columns to perform analysis once you have the notebook -- this will not be included in the excel print out.
- KEEP_MISSING_VALUES_FOR_SEGMENTATION: whether or not to fill NaNs. True keeps missing.

__Cleaning data__

Comparator data should live in a flat csv, just like the CEJST data. Right now, each comparator has a folder in `comparison_tool/data` that contains a notebook to clean the data (this is because the data is often quirky and so live inspection is easier), the `raw` data, and the `clean` data. We can also point the `yaml` to an `ETL` output, for files in which there are multiple important columns, if you want to use one of the data sources the CEJST team has already included in the pipeline (which are already compatible with the tool).

When you make your own output for comparison, make sure to follow the steps below.

When you clean the data, it's important that you:
1. Ensure the tract level id is named the same as the field name in score M (specified in `field_names`). Right now, this is `GEOID10_TRACT`.
2. Ensure the identification column is a `bool`, as shown in the sketch below.
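A hypothetical cleaning snippet covering both requirements (all file and column names below are placeholders for your own comparator data):

```
# Placeholder names throughout; adapt to the comparator you are cleaning.
import pandas as pd

raw = pd.read_csv("raw/my_comparator.csv", dtype={"tract_fips": str})
clean = raw.rename(columns={"tract_fips": "GEOID10_TRACT"})  # match the score M field name
clean["my_dac_flag"] = clean["my_dac_flag"].astype(bool)  # identification column must be bool
clean.to_csv("clean/my_comparator.csv", index=False)
```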
You will provide the path to the comparator data in the parameter yaml file.

__How to use the shell script__

We have also included a shell script, `run_all_comparisons.sh`. This script includes all
of the commands that we have run to generate pairwise comparisons.

To run: `$ bash run_all_comparisons.sh`

To add to it: create a new line and include the command line for each notebook run.

Binary file not shown.
run_all_comparisons.sh
@@ -1,3 +1,3 @@
#! /bin/bash

poetry run python3 src/run_tract_comparison.py --template_notebook=src/tract_comparison__template.ipynb --parameter_yaml=src/donut_hole_dacs.yaml
src/donut_hole_dacs.yaml
@@ -17,7 +17,7 @@ DEMOGRAPHIC_COLUMNS:
DEMOGRAPHIC_FILE: ../../data_pipeline/data/dataset/census_acs_2019/usa.csv
OUTPUT_DATA_PATH: output/donut_hole_dac
SCORE_FILE: ../../data_pipeline/data/score/csv/full/usa.csv
OTHER_COMPARATOR_COLUMNS:
  - donut_hole_dac
  - P200_PFS
  - HSEF
src/run_tract_comparison.py
@@ -12,12 +12,12 @@ To see more: https://buildmedia.readthedocs.org/media/pdf/papermill/latest/paper
To run:
`$ python src/run_tract_comparison.py --template_notebook=TEMPLATE.ipynb --parameter_yaml=PARAMETERS.yaml`
"""

-import os
-import datetime
import argparse
-import yaml
+import datetime
+import os

import papermill as pm
+import yaml


def _read_param_file(param_file: str) -> dict:
@ -16,7 +16,7 @@
|
||||||
"import matplotlib.pyplot as plt\n",
|
"import matplotlib.pyplot as plt\n",
|
||||||
"\n",
|
"\n",
|
||||||
"from data_pipeline.score import field_names\n",
|
"from data_pipeline.score import field_names\n",
|
||||||
"from data_pipeline.comparison_tool.src import utils \n",
|
"from data_pipeline.comparison_tool.src import utils\n",
|
||||||
"\n",
|
"\n",
|
||||||
"pd.options.display.float_format = \"{:,.3f}\".format\n",
|
"pd.options.display.float_format = \"{:,.3f}\".format\n",
|
||||||
"%load_ext lab_black"
|
"%load_ext lab_black"
|
||||||
|
@ -128,9 +128,7 @@
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"utils.validate_new_data(\n",
|
"utils.validate_new_data(file_path=COMPARATOR_FILE, score_col=COMPARATOR_COLUMN)"
|
||||||
" file_path=COMPARATOR_FILE, score_col=COMPARATOR_COLUMN\n",
|
|
||||||
")"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
@ -148,20 +146,25 @@
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"comparator_cols = [COMPARATOR_COLUMN] + OTHER_COMPARATOR_COLUMNS if OTHER_COMPARATOR_COLUMNS else [COMPARATOR_COLUMN]\n",
|
"comparator_cols = (\n",
|
||||||
|
" [COMPARATOR_COLUMN] + OTHER_COMPARATOR_COLUMNS\n",
|
||||||
|
" if OTHER_COMPARATOR_COLUMNS\n",
|
||||||
|
" else [COMPARATOR_COLUMN]\n",
|
||||||
|
")\n",
|
||||||
"\n",
|
"\n",
|
||||||
"#papermill_description=Loading_data\n",
|
"# papermill_description=Loading_data\n",
|
||||||
"joined_df = pd.concat(\n",
|
"joined_df = pd.concat(\n",
|
||||||
" [\n",
|
" [\n",
|
||||||
" utils.read_file(\n",
|
" utils.read_file(\n",
|
||||||
" file_path=SCORE_FILE,\n",
|
" file_path=SCORE_FILE,\n",
|
||||||
" columns=[TOTAL_POPULATION_COLUMN, SCORE_COLUMN] + ADDITIONAL_DEMO_COLUMNS,\n",
|
" columns=[TOTAL_POPULATION_COLUMN, SCORE_COLUMN]\n",
|
||||||
|
" + ADDITIONAL_DEMO_COLUMNS,\n",
|
||||||
" geoid=GEOID_COLUMN,\n",
|
" geoid=GEOID_COLUMN,\n",
|
||||||
" ),\n",
|
" ),\n",
|
||||||
" utils.read_file(\n",
|
" utils.read_file(\n",
|
||||||
" file_path=COMPARATOR_FILE,\n",
|
" file_path=COMPARATOR_FILE,\n",
|
||||||
" columns=comparator_cols,\n",
|
" columns=comparator_cols,\n",
|
||||||
" geoid=GEOID_COLUMN\n",
|
" geoid=GEOID_COLUMN,\n",
|
||||||
" ),\n",
|
" ),\n",
|
||||||
" utils.read_file(\n",
|
" utils.read_file(\n",
|
||||||
" file_path=DEMOGRAPHIC_FILE,\n",
|
" file_path=DEMOGRAPHIC_FILE,\n",
|
||||||
|
@@ -196,13 +199,13 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "#papermill_description=Summary_stats\n",
+    "# papermill_description=Summary_stats\n",
     "population_df = utils.produce_summary_stats(\n",
     "    joined_df=joined_df,\n",
     "    comparator_column=COMPARATOR_COLUMN,\n",
     "    score_column=SCORE_COLUMN,\n",
     "    population_column=TOTAL_POPULATION_COLUMN,\n",
-    "    geoid_column=GEOID_COLUMN\n",
+    "    geoid_column=GEOID_COLUMN,\n",
     ")\n",
     "population_df"
    ]
@@ -224,18 +227,18 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "#papermill_description=Tract_stats\n",
+    "# papermill_description=Tract_stats\n",
     "tract_level_by_identification_df = pd.concat(\n",
     "    [\n",
     "        utils.get_demo_series(\n",
     "            grouping_column=COMPARATOR_COLUMN,\n",
     "            joined_df=joined_df,\n",
-    "            demo_columns=ADDITIONAL_DEMO_COLUMNS + DEMOGRAPHIC_COLUMNS\n",
+    "            demo_columns=ADDITIONAL_DEMO_COLUMNS + DEMOGRAPHIC_COLUMNS,\n",
     "        ),\n",
     "        utils.get_demo_series(\n",
     "            grouping_column=SCORE_COLUMN,\n",
     "            joined_df=joined_df,\n",
-    "            demo_columns=ADDITIONAL_DEMO_COLUMNS + DEMOGRAPHIC_COLUMNS\n",
+    "            demo_columns=ADDITIONAL_DEMO_COLUMNS + DEMOGRAPHIC_COLUMNS,\n",
     "        ),\n",
     "    ],\n",
     "    axis=1,\n",
@@ -256,17 +259,25 @@
     "    y=\"Variable\",\n",
     "    x=\"Avg in tracts\",\n",
     "    hue=\"Definition\",\n",
-    "    data=tract_level_by_identification_df.sort_values(by=COMPARATOR_COLUMN, ascending=False)\n",
+    "    data=tract_level_by_identification_df.sort_values(\n",
+    "        by=COMPARATOR_COLUMN, ascending=False\n",
+    "    )\n",
     "    .stack()\n",
     "    .reset_index()\n",
     "    .rename(\n",
-    "        columns={\"level_0\": \"Variable\", \"level_1\": \"Definition\", 0: \"Avg in tracts\"}\n",
+    "        columns={\n",
+    "            \"level_0\": \"Variable\",\n",
+    "            \"level_1\": \"Definition\",\n",
+    "            0: \"Avg in tracts\",\n",
+    "        }\n",
     "    ),\n",
     "    palette=\"Blues\",\n",
     ")\n",
     "plt.xlim(0, 1)\n",
     "plt.title(\"Tract level averages by identification strategy\")\n",
-    "plt.savefig(os.path.join(OUTPUT_DATA_PATH, \"tract_lvl_avg.jpg\"), bbox_inches='tight')"
+    "plt.savefig(\n",
+    "    os.path.join(OUTPUT_DATA_PATH, \"tract_lvl_avg.jpg\"), bbox_inches=\"tight\"\n",
+    ")"
    ]
   },
   {
@@ -276,13 +287,13 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "#papermill_description=Tract_stats_grouped\n",
+    "# papermill_description=Tract_stats_grouped\n",
     "tract_level_by_grouping_df = utils.get_tract_level_grouping(\n",
     "    joined_df=joined_df,\n",
     "    score_column=SCORE_COLUMN,\n",
     "    comparator_column=COMPARATOR_COLUMN,\n",
     "    demo_columns=ADDITIONAL_DEMO_COLUMNS + DEMOGRAPHIC_COLUMNS,\n",
-    "    keep_missing_values=KEEP_MISSING_VALUES_FOR_SEGMENTATION\n",
+    "    keep_missing_values=KEEP_MISSING_VALUES_FOR_SEGMENTATION,\n",
     ")\n",
     "\n",
     "tract_level_by_grouping_formatted_df = utils.format_multi_index_for_excel(\n",
@@ -315,7 +326,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "#papermill_description=Population_stats\n",
+    "# papermill_description=Population_stats\n",
     "population_weighted_stats_df = pd.concat(\n",
     "    [\n",
     "        utils.construct_weighted_statistics(\n",
@@ -363,7 +374,7 @@
     "comparator_and_cejst_proportion_series, states = utils.get_final_summary_info(\n",
     "    population=population_df,\n",
     "    comparator_file=COMPARATOR_FILE,\n",
-    "    geoid_col=GEOID_COLUMN\n",
+    "    geoid_col=GEOID_COLUMN,\n",
     ")"
    ]
   },
@@ -393,7 +404,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "#papermill_description=Writing_excel\n",
+    "# papermill_description=Writing_excel\n",
     "utils.write_single_comparison_excel(\n",
     "    output_excel=OUTPUT_EXCEL,\n",
     "    population_df=population_df,\n",
@@ -401,7 +412,7 @@
     "    population_weighted_stats_df=population_weighted_stats_df,\n",
     "    tract_level_by_grouping_formatted_df=tract_level_by_grouping_formatted_df,\n",
     "    comparator_and_cejst_proportion_series=comparator_and_cejst_proportion_series,\n",
-    "    states_text=states_text\n",
+    "    states_text=states_text,\n",
     ")"
    ]
  }
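The `utils.py` diff just below imports `xlsxwriter` and keeps `DEFAULT_COLUMN_WIDTH = 18`. As a rough sketch of the pattern behind `write_single_comparison_excel` (not its actual implementation; `write_df_to_excel` is a hypothetical helper), pandas drives xlsxwriter like this:

```python
import pandas as pd

DEFAULT_COLUMN_WIDTH = 18  # same constant as in utils.py


def write_df_to_excel(df: pd.DataFrame, output_excel: str, sheet_name: str) -> None:
    """Write one DataFrame to a sheet and widen its columns (hypothetical helper)."""
    with pd.ExcelWriter(output_excel, engine="xlsxwriter") as writer:
        df.to_excel(writer, sheet_name=sheet_name, index=False)
        worksheet = writer.sheets[sheet_name]
        # Widen every column to the shared default so the long indicator labels
        # in these comparison sheets stay readable.
        worksheet.set_column(0, len(df.columns) - 1, DEFAULT_COLUMN_WIDTH)
```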
@@ -1,9 +1,9 @@
 import pathlib
 
 import pandas as pd
 import xlsxwriter
 
-from data_pipeline.score import field_names
 from data_pipeline.etl.sources.census.etl_utils import get_state_information
+from data_pipeline.score import field_names
 
 # Some excel parameters
 DEFAULT_COLUMN_WIDTH = 18
@@ -40,7 +40,7 @@ def validate_new_data
     assert (
         checking_df[score_col].nunique() <= 3
     ), f"Error: there are too many values possible in {score_col}"
-    assert (True in checking_df[score_col].unique()) & (
+    assert (True in checking_df[score_col].unique()) | (
         False in checking_df[score_col].unique()
     ), f"Error: {score_col} should be a boolean"
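The `&` to `|` swap in `validate_new_data` is a real bug fix, not a style change: with `&`, a comparator column that is uniformly `True` (or uniformly `False`) failed validation even though it is a perfectly valid boolean score, while `|` only requires that at least one boolean value appears. A quick illustration:

```python
import pandas as pd

s = pd.Series([True, True, True])  # a valid, if one-sided, boolean score column

old_check = (True in s.unique()) & (False in s.unique())  # False: all-True column fails
new_check = (True in s.unique()) | (False in s.unique())  # True: at least one bool found

print(old_check, new_check)  # False True
```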
@@ -1,8 +1,7 @@
 import pathlib
 
-from dynaconf import Dynaconf
-
 import data_pipeline
+from dynaconf import Dynaconf
 
 settings = Dynaconf(
     envvar_prefix="DYNACONF",
@@ -12,7 +11,8 @@ settings = Dynaconf(
 
 # set root dir
 settings.APP_ROOT = pathlib.Path(data_pipeline.__file__).resolve().parent
-
+settings.DATA_PATH = settings.APP_ROOT / "data"
+settings.REQUESTS_DEFAULT_TIMOUT = 3600
 # To set an environment use:
 # Linux/OSX: export ENV_FOR_DYNACONF=staging
 # Windows: set ENV_FOR_DYNACONF=staging
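A usage sketch for the two settings added above; the `data_pipeline.config` import path is an assumption about where this module lives:

```python
import requests

from data_pipeline.config import settings  # assumed module path

# DATA_PATH is a pathlib.Path, so the usual / joining works
census_dir = settings.DATA_PATH / "census"

# Note the timeout constant keeps its original "TIMOUT" spelling from the diff
response = requests.get(
    "https://www2.census.gov/geo/tiger/TIGER2010/TRACT/2010/",
    timeout=settings.REQUESTS_DEFAULT_TIMOUT,
)
```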
@@ -1,256 +1,406 @@
 ---
 global_config:
-  sort_by_label: Census tract ID
+  sort_by_label: Census tract 2010 ID
   rounding_num:
     float: 2
     loss_rate_percentage: 4
 fields:
   - score_name: GEOID10_TRACT
-    label: Census tract ID
+    label: Census tract 2010 ID
     format: string
   - score_name: County Name
     label: County Name
     format: string
   - score_name: State/Territory
     label: State/Territory
     format: string
-  - score_name: Total threshold criteria exceeded
-    label: Total threshold criteria exceeded
-    format: int64
+  - score_name: Percent Black or African American
+    label: Percent Black or African American alone
+    format: float
-  - score_name: Total categories exceeded
-    label: Total categories exceeded
-    format: int64
+  - score_name: Percent American Indian / Alaska Native
+    label: Percent American Indian / Alaska Native
+    format: float
-  - score_name: Definition M (communities)
-    label: Identified as disadvantaged
-    format: bool
+  - score_name: Percent Asian
+    label: Percent Asian
+    format: float
-  - score_name: Total population
-    label: Total population
+  - score_name: Percent Native Hawaiian or Pacific
+    label: Percent Native Hawaiian or Pacific
     format: float
-  - score_name: Is low income and has a low percent of higher ed students?
-    label: Is low income and high percent of residents that are not higher ed students?
-    format: bool
+  - score_name: Percent two or more races
+    label: Percent two or more races
+    format: float
-  - score_name: Greater than or equal to the 90th percentile for expected agriculture loss rate, is low income, and has a low percent of higher ed students?
-    label: Greater than or equal to the 90th percentile for expected agriculture loss rate, is low income, and high percent of residents that are not higher ed students?
-    format: bool
+  - score_name: Percent White
+    label: Percent White
+    format: float
-  - score_name: Expected agricultural loss rate (Natural Hazards Risk Index) (percentile)
-    label: Expected agricultural loss rate (Natural Hazards Risk Index) (percentile)
-    format: percentage
+  - score_name: Percent Hispanic or Latino
+    label: Percent Hispanic or Latino
+    format: float
-  - score_name: Expected agricultural loss rate (Natural Hazards Risk Index)
-    label: Expected agricultural loss rate (Natural Hazards Risk Index)
-    format: loss_rate_percentage
+  - score_name: Percent other races
+    label: Percent other races
+    format: float
-  - score_name: Greater than or equal to the 90th percentile for expected building loss rate, is low income, and has a low percent of higher ed students?
-    label: Greater than or equal to the 90th percentile for expected building loss rate, is low income, and high percent of residents that are not higher ed students?
-    format: bool
+  - score_name: Percent age under 10
+    label: Percent age under 10
+    format: float
-  - score_name: Expected building loss rate (Natural Hazards Risk Index) (percentile)
-    label: Expected building loss rate (Natural Hazards Risk Index) (percentile)
-    format: percentage
+  - score_name: Percent age 10 to 64
+    label: Percent age 10 to 64
+    format: float
-  - score_name: Expected building loss rate (Natural Hazards Risk Index)
-    label: Expected building loss rate (Natural Hazards Risk Index)
-    format: loss_rate_percentage
+  - score_name: Percent age over 64
+    label: Percent age over 64
+    format: float
-  - score_name: Greater than or equal to the 90th percentile for expected population loss rate, is low income, and has a low percent of higher ed students?
-    label: Greater than or equal to the 90th percentile for expected population loss rate, is low income, and high percent of residents that are not higher ed students?
-    format: bool
+  - score_name: Total threshold criteria exceeded
+    label: Total threshold criteria exceeded
+    format: int64
-  - score_name: Expected population loss rate (Natural Hazards Risk Index) (percentile)
-    label: Expected population loss rate (Natural Hazards Risk Index) (percentile)
-    format: percentage
+  - score_name: Total categories exceeded
+    label: Total categories exceeded
+    format: int64
-  - score_name: Expected population loss rate (Natural Hazards Risk Index)
-    label: Expected population loss rate (Natural Hazards Risk Index)
-    format: loss_rate_percentage
+  - score_name: Definition N (communities)
+    label: Identified as disadvantaged without considering neighbors
+    format: bool
-  - score_name: Greater than or equal to the 90th percentile for energy burden, is low income, and has a low percent of higher ed students?
-    label: Greater than or equal to the 90th percentile for energy burden, is low income, and high percent of residents that are not higher ed students?
+  - score_name: Definition N (communities) (based on adjacency index and low income alone)
+    label: Identified as disadvantaged based on neighbors and relaxed low income threshold only
     format: bool
-  - score_name: Energy burden (percentile)
-    label: Energy burden (percentile)
-    format: percentage
+  - score_name: Identified as disadvantaged due to tribal overlap
+    label: Identified as disadvantaged due to tribal overlap
+    format: bool
-  - score_name: Energy burden
-    label: Energy burden
-    format: percentage
+  - score_name: Definition N community, including adjacency index tracts
+    label: Identified as disadvantaged
+    format: bool
-  - score_name: Greater than or equal to the 90th percentile for PM2.5 exposure, is low income, and has a low percent of higher ed students?
-    label: Greater than or equal to the 90th percentile for PM2.5 exposure, is low income, and high percent of residents that are not higher ed students?
-    format: bool
+  - score_name: Percentage of tract that is disadvantaged
+    label: Percentage of tract that is disadvantaged by area
+    format: percentage
-  - score_name: PM2.5 in the air (percentile)
-    label: PM2.5 in the air (percentile)
+  - score_name: Definition N (communities) (average of neighbors)
+    label: Share of neighbors that are identified as disadvantaged
     format: percentage
-  - score_name: PM2.5 in the air
-    label: PM2.5 in the air
+  - score_name: Total population
+    label: Total population
     format: float
-  - score_name: Greater than or equal to the 90th percentile for diesel particulate matter, is low income, and has a low percent of higher ed students?
-    label: Greater than or equal to the 90th percentile for diesel particulate matter, is low income, and high percent of residents that are not higher ed students?
-    format: bool
+  - score_name: Percent of individuals below 200% Federal Poverty Line, imputed and adjusted (percentile)
+    label: Adjusted percent of individuals below 200% Federal Poverty Line (percentile)
+    format: float
-  - score_name: Diesel particulate matter exposure (percentile)
-    label: Diesel particulate matter exposure (percentile)
-    format: percentage
+  - score_name: Percent of individuals below 200% Federal Poverty Line, imputed and adjusted
+    label: Adjusted percent of individuals below 200% Federal Poverty Line
+    format: float
-  - score_name: Diesel particulate matter exposure
-    label: Diesel particulate matter exposure
-    format: float
+  - score_name: Is low income (imputed and adjusted)?
+    label: Is low income?
+    format: bool
-  - score_name: Greater than or equal to the 90th percentile for traffic proximity, is low income, and has a low percent of higher ed students?
-    label: Greater than or equal to the 90th percentile for traffic proximity, is low income, and high percent of residents that are not higher ed students?
+  - score_name: Income data has been estimated based on neighbor income
+    label: Income data has been estimated based on geographic neighbor income
     format: bool
-  - score_name: Traffic proximity and volume (percentile)
-    label: Traffic proximity and volume (percentile)
-    format: percentage
+  - score_name: Greater than or equal to the 90th percentile for expected agriculture loss rate and is low income?
+    label: Greater than or equal to the 90th percentile for expected agriculture loss rate and is low income?
+    format: bool
-  - score_name: Traffic proximity and volume
-    label: Traffic proximity and volume
-    format: float
+  - score_name: Expected agricultural loss rate (Natural Hazards Risk Index) (percentile)
+    label: Expected agricultural loss rate (Natural Hazards Risk Index) (percentile)
+    format: percentage
-  - score_name: Greater than or equal to the 90th percentile for housing burden, is low income, and has a low percent of higher ed students?
-    label: Greater than or equal to the 90th percentile for housing burden, is low income, and high percent of residents that are not higher ed students?
-    format: bool
+  - score_name: Expected agricultural loss rate (Natural Hazards Risk Index)
+    label: Expected agricultural loss rate (Natural Hazards Risk Index)
+    format: loss_rate_percentage
-  - score_name: Housing burden (percent) (percentile)
-    label: Housing burden (percent) (percentile)
-    format: percentage
+  - score_name: Greater than or equal to the 90th percentile for expected building loss rate and is low income?
+    label: Greater than or equal to the 90th percentile for expected building loss rate and is low income?
+    format: bool
-  - score_name: Housing burden (percent)
-    label: Housing burden (percent)
+  - score_name: Expected building loss rate (Natural Hazards Risk Index) (percentile)
+    label: Expected building loss rate (Natural Hazards Risk Index) (percentile)
     format: percentage
-  - score_name: Greater than or equal to the 90th percentile for lead paint, the median house value is less than 90th percentile, is low income, and has a low percent of higher ed students?
-    label: Greater than or equal to the 90th percentile for lead paint, the median house value is less than 90th percentile, is low income, and high percent of residents that are not higher ed students?
-    format: bool
+  - score_name: Expected building loss rate (Natural Hazards Risk Index)
+    label: Expected building loss rate (Natural Hazards Risk Index)
+    format: loss_rate_percentage
-  - score_name: Percent pre-1960s housing (lead paint indicator) (percentile)
-    label: Percent pre-1960s housing (lead paint indicator) (percentile)
-    format: percentage
+  - score_name: Greater than or equal to the 90th percentile for expected population loss rate and is low income?
+    label: Greater than or equal to the 90th percentile for expected population loss rate and is low income?
+    format: bool
-  - score_name: Percent pre-1960s housing (lead paint indicator)
-    label: Percent pre-1960s housing (lead paint indicator)
+  - score_name: Expected population loss rate (Natural Hazards Risk Index) (percentile)
+    label: Expected population loss rate (Natural Hazards Risk Index) (percentile)
     format: percentage
-  - score_name: Median value ($) of owner-occupied housing units (percentile)
-    label: Median value ($) of owner-occupied housing units (percentile)
-    format: percentage
+  - score_name: Expected population loss rate (Natural Hazards Risk Index)
+    label: Expected population loss rate (Natural Hazards Risk Index)
+    format: loss_rate_percentage
-  - score_name: Median value ($) of owner-occupied housing units
-    label: Median value ($) of owner-occupied housing units
-    format: float
+  - score_name: Share of properties at risk of flood in 30 years (percentile)
+    label: Share of properties at risk of flood in 30 years (percentile)
+    format: percentage
-  - score_name: Greater than or equal to the 90th percentile for proximity to hazardous waste facilities, is low income, and has a low percent of higher ed students?
-    label: Greater than or equal to the 90th percentile for proximity to hazardous waste facilities, is low income, and high percent of residents that are not higher ed students?
-    format: bool
+  - score_name: Share of properties at risk of flood in 30 years
+    label: Share of properties at risk of flood in 30 years
+    format: percentage
-  - score_name: Proximity to hazardous waste sites (percentile)
-    label: Proximity to hazardous waste sites (percentile)
-    format: percentage
+  - score_name: Greater than or equal to the 90th percentile for share of properties at risk of flood in 30 years
+    label: Greater than or equal to the 90th percentile for share of properties at risk of flood in 30 years
+    format: bool
-  - score_name: Proximity to hazardous waste sites
-    label: Proximity to hazardous waste sites
-    format: float
+  - score_name: Greater than or equal to the 90th percentile for share of properties at risk of flood in 30 years and is low income?
+    label: Greater than or equal to the 90th percentile for share of properties at risk of flood in 30 years and is low income?
+    format: bool
-  - score_name: Greater than or equal to the 90th percentile for proximity to superfund sites, is low income, and has a low percent of higher ed students?
-    label: Greater than or equal to the 90th percentile for proximity to superfund sites, is low income, and high percent of residents that are not higher ed students?
-    format: bool
+  - score_name: Share of properties at risk of fire in 30 years (percentile)
+    label: Share of properties at risk of fire in 30 years (percentile)
+    format: percentage
-  - score_name: Proximity to NPL sites (percentile)
-    label: Proximity to NPL (Superfund) sites (percentile)
+  - score_name: Share of properties at risk of fire in 30 years
+    label: Share of properties at risk of fire in 30 years
     format: percentage
-  - score_name: Proximity to NPL sites
-    label: Proximity to NPL (Superfund) sites
-    format: float
+  - score_name: Greater than or equal to the 90th percentile for share of properties at risk of fire in 30 years
+    label: Greater than or equal to the 90th percentile for share of properties at risk of fire in 30 years
+    format: bool
-  - score_name: Greater than or equal to the 90th percentile for proximity to RMP sites, is low income, and has a low percent of higher ed students?
-    label: Greater than or equal to the 90th percentile for proximity to RMP sites, is low income, and high percent of residents that are not higher ed students?
+  - score_name: Greater than or equal to the 90th percentile for share of properties at risk of fire in 30 years and is low income?
+    label: Greater than or equal to the 90th percentile for share of properties at risk of fire in 30 years and is low income?
     format: bool
-  - score_name: Proximity to Risk Management Plan (RMP) facilities (percentile)
-    label: Proximity to Risk Management Plan (RMP) facilities (percentile)
-    format: percentage
+  - score_name: Greater than or equal to the 90th percentile for energy burden and is low income?
+    label: Greater than or equal to the 90th percentile for energy burden and is low income?
+    format: bool
-  - score_name: Proximity to Risk Management Plan (RMP) facilities
-    label: Proximity to Risk Management Plan (RMP) facilities
-    format: float
+  - score_name: Energy burden (percentile)
+    label: Energy burden (percentile)
+    format: percentage
-  - score_name: Greater than or equal to the 90th percentile for wastewater discharge, is low income, and has a low percent of higher ed students?
-    label: Greater than or equal to the 90th percentile for wastewater discharge, is low income, and high percent of residents that are not higher ed students?
-    format: bool
+  - score_name: Energy burden
+    label: Energy burden
+    format: percentage
-  - score_name: Wastewater discharge (percentile)
-    label: Wastewater discharge (percentile)
-    format: percentage
+  - score_name: Greater than or equal to the 90th percentile for PM2.5 exposure and is low income?
+    label: Greater than or equal to the 90th percentile for PM2.5 exposure and is low income?
+    format: bool
-  - score_name: Wastewater discharge
-    label: Wastewater discharge
-    format: float
+  - score_name: PM2.5 in the air (percentile)
+    label: PM2.5 in the air (percentile)
+    format: percentage
-  - score_name: Greater than or equal to the 90th percentile for asthma, is low income, and has a low percent of higher ed students?
-    label: Greater than or equal to the 90th percentile for asthma, is low income, and high percent of residents that are not higher ed students?
-    format: bool
+  - score_name: PM2.5 in the air
+    label: PM2.5 in the air
+    format: float
-  - score_name: Current asthma among adults aged greater than or equal to 18 years (percentile)
-    label: Current asthma among adults aged greater than or equal to 18 years (percentile)
-    format: percentage
+  - score_name: Greater than or equal to the 90th percentile for diesel particulate matter and is low income?
+    label: Greater than or equal to the 90th percentile for diesel particulate matter and is low income?
+    format: bool
-  - score_name: Current asthma among adults aged greater than or equal to 18 years
-    label: Current asthma among adults aged greater than or equal to 18 years
+  - score_name: Diesel particulate matter exposure (percentile)
+    label: Diesel particulate matter exposure (percentile)
     format: percentage
-  - score_name: Greater than or equal to the 90th percentile for diabetes, is low income, and has a low percent of higher ed students?
-    label: Greater than or equal to the 90th percentile for diabetes, is low income, and high percent of residents that are not higher ed students?
-    format: bool
+  - score_name: Diesel particulate matter exposure
+    label: Diesel particulate matter exposure
+    format: float
-  - score_name: Diagnosed diabetes among adults aged greater than or equal to 18 years (percentile)
-    label: Diagnosed diabetes among adults aged greater than or equal to 18 years (percentile)
-    format: percentage
+  - score_name: Greater than or equal to the 90th percentile for traffic proximity and is low income?
+    label: Greater than or equal to the 90th percentile for traffic proximity and is low income?
+    format: bool
-  - score_name: Diagnosed diabetes among adults aged greater than or equal to 18 years
-    label: Diagnosed diabetes among adults aged greater than or equal to 18 years
+  - score_name: Traffic proximity and volume (percentile)
+    label: Traffic proximity and volume (percentile)
     format: percentage
-  - score_name: Greater than or equal to the 90th percentile for heart disease, is low income, and has a low percent of higher ed students?
-    label: Greater than or equal to the 90th percentile for heart disease, is low income, and high percent of residents that are not higher ed students?
-    format: bool
+  - score_name: Traffic proximity and volume
+    label: Traffic proximity and volume
+    format: float
-  - score_name: Coronary heart disease among adults aged greater than or equal to 18 years (percentile)
-    label: Coronary heart disease among adults aged greater than or equal to 18 years (percentile)
-    format: percentage
+  - score_name: Greater than or equal to the 90th percentile for DOT transit barriers and is low income?
+    label: Greater than or equal to the 90th percentile for DOT transit barriers and is low income?
+    format: bool
-  - score_name: Coronary heart disease among adults aged greater than or equal to 18 years
-    label: Coronary heart disease among adults aged greater than or equal to 18 years
+  - score_name: DOT Travel Barriers Score (percentile)
+    label: DOT Travel Barriers Score (percentile)
     format: percentage
-  - score_name: Greater than or equal to the 90th percentile for low life expectancy, is low income, and has a low percent of higher ed students?
-    label: Greater than or equal to the 90th percentile for low life expectancy, is low income, and high percent of residents that are not higher ed students?
+  - score_name: Greater than or equal to the 90th percentile for housing burden and is low income?
+    label: Greater than or equal to the 90th percentile for housing burden and is low income?
     format: bool
-  - score_name: Low life expectancy (percentile)
-    label: Low life expectancy (percentile)
+  - score_name: Housing burden (percent) (percentile)
+    label: Housing burden (percent) (percentile)
     format: percentage
-  - score_name: Life expectancy (years)
-    label: Life expectancy (years)
-    format: float
+  - score_name: Housing burden (percent)
+    label: Housing burden (percent)
+    format: percentage
-  - score_name: Greater than or equal to the 90th percentile for low median household income as a percent of area median income, has low HS attainment, and has a low percent of higher ed students?
-    label: Greater than or equal to the 90th percentile for low median household income as a percent of area median income, has low HS attainment, and high percent of residents that are not higher ed students?
+  - score_name: Greater than or equal to the 90th percentile for lead paint and the median house value is less than 90th percentile and is low income?
+    label: Greater than or equal to the 90th percentile for lead paint, the median house value is less than 90th percentile and is low income?
     format: bool
-  - score_name: Low median household income as a percent of area median income (percentile)
-    label: Low median household income as a percent of area median income (percentile)
+  - score_name: Percent pre-1960s housing (lead paint indicator) (percentile)
+    label: Percent pre-1960s housing (lead paint indicator) (percentile)
     format: percentage
-  - score_name: Median household income as a percent of area median income
-    label: Median household income as a percent of area median income
+  - score_name: Percent pre-1960s housing (lead paint indicator)
+    label: Percent pre-1960s housing (lead paint indicator)
     format: percentage
-  - score_name: Greater than or equal to the 90th percentile for households in linguistic isolation, has low HS attainment, and has a low percent of higher ed students?
-    label: Greater than or equal to the 90th percentile for households in linguistic isolation, has low HS attainment, and high percent of residents that are not higher ed students?
-    format: bool
+  - score_name: Median value ($) of owner-occupied housing units (percentile)
+    label: Median value ($) of owner-occupied housing units (percentile)
+    format: percentage
-  - score_name: Linguistic isolation (percent) (percentile)
-    label: Linguistic isolation (percent) (percentile)
-    format: percentage
+  - score_name: Median value ($) of owner-occupied housing units
+    label: Median value ($) of owner-occupied housing units
+    format: float
-  - score_name: Linguistic isolation (percent)
-    label: Linguistic isolation (percent)
-    format: percentage
+  - score_name: Greater than or equal to the 90th percentile for share of the tract's land area that is covered by impervious surface or cropland as a percent and is low income?
+    label: Greater than or equal to the 90th percentile for share of the tract's land area that is covered by impervious surface or cropland as a percent and is low income?
+    format: bool
-  - score_name: Greater than or equal to the 90th percentile for unemployment, has low HS attainment, and has a low percent of higher ed students?
-    label: Greater than or equal to the 90th percentile for unemployment, has low HS attainment, and high percent of residents that are not higher ed students?
+  - score_name: Greater than or equal to the 90th percentile for share of the tract's land area that is covered by impervious surface or cropland as a percent
+    label: Greater than or equal to the 90th percentile for share of the tract's land area that is covered by impervious surface or cropland as a percent
     format: bool
-  - score_name: Unemployment (percent) (percentile)
-    label: Unemployment (percent) (percentile)
+  - score_name: Share of the tract's land area that is covered by impervious surface or cropland as a percent
+    label: Share of the tract's land area that is covered by impervious surface or cropland as a percent
     format: percentage
-  - score_name: Unemployment (percent)
-    label: Unemployment (percent)
+  - score_name: Share of the tract's land area that is covered by impervious surface or cropland as a percent (percentile)
+    label: Share of the tract's land area that is covered by impervious surface or cropland as a percent (percentile)
     format: percentage
-  - score_name: Greater than or equal to the 90th percentile for households at or below 100% federal poverty level, has low HS attainment, and has a low percent of higher ed students?
-    label: Greater than or equal to the 90th percentile for households at or below 100% federal poverty level, has low HS attainment, and high percent of residents that are not higher ed students?
+  - score_name: Does the tract have at least 35 acres in it?
+    label: Does the tract have at least 35 acres in it?
     format: bool
-  - score_name: Percent of individuals below 200% Federal Poverty Line (percentile)
-    label: Percent of individuals below 200% Federal Poverty Line (percentile)
-    format: percentage
+  - score_name: Tract-level redlining score meets or exceeds 3.25 and is low income
+    label: Tract experienced historic underinvestment and remains low income
+    format: bool
-  - score_name: Percent of individuals below 200% Federal Poverty Line
-    label: Percent of individuals below 200% Federal Poverty Line
-    format: percentage
+  - score_name: Tract-level redlining score meets or exceeds 3.25
+    label: Tract experienced historic underinvestment
+    format: bool
-  - score_name: Percent of individuals < 100% Federal Poverty Line (percentile)
-    label: Percent of individuals < 100% Federal Poverty Line (percentile)
-    format: percentage
+  - score_name: Share of homes with no kitchen or indoor plumbing (percent) (percentile)
+    label: Share of homes with no kitchen or indoor plumbing (percentile)
+    format: float
-  - score_name: Percent of individuals < 100% Federal Poverty Line
-    label: Percent of individuals < 100% Federal Poverty Line
-    format: percentage
+  - score_name: Share of homes with no kitchen or indoor plumbing (percent)
+    label: Share of homes with no kitchen or indoor plumbing (percent)
+    format: float
-  - score_name: Percent individuals age 25 or over with less than high school degree (percentile)
-    label: Percent individuals age 25 or over with less than high school degree (percentile)
-    format: percentage
+  - score_name: Greater than or equal to the 90th percentile for proximity to hazardous waste facilities and is low income?
+    label: Greater than or equal to the 90th percentile for proximity to hazardous waste facilities and is low income?
+    format: bool
-  - score_name: Percent individuals age 25 or over with less than high school degree
-    label: Percent individuals age 25 or over with less than high school degree
+  - score_name: Proximity to hazardous waste sites (percentile)
+    label: Proximity to hazardous waste sites (percentile)
     format: percentage
-  - score_name: Unemployment (percent) in 2009 (island areas) and 2010 (states and PR)
-    label: Unemployment (percent) in 2009 (island areas) and 2010 (states and PR)
-    format: percentage
+  - score_name: Proximity to hazardous waste sites
+    label: Proximity to hazardous waste sites
+    format: float
-  - score_name: Percentage households below 100% of federal poverty line in 2009 (island areas) and 2010 (states and PR)
-    label: Percentage households below 100% of federal poverty line in 2009 (island areas) and 2010 (states and PR)
-    format: percentage
+  - score_name: Greater than or equal to the 90th percentile for proximity to superfund sites and is low income?
+    label: Greater than or equal to the 90th percentile for proximity to superfund sites and is low income?
+    format: bool
-  - score_name: Greater than or equal to the 90th percentile for unemployment and has low HS education in 2009 (island areas)?
-    label: Greater than or equal to the 90th percentile for unemployment and has low HS education in 2009 (island areas)?
-    format: bool
+  - score_name: Proximity to NPL sites (percentile)
+    label: Proximity to NPL (Superfund) sites (percentile)
+    format: percentage
-  - score_name: Greater than or equal to the 90th percentile for households at or below 100% federal poverty level and has low HS education in 2009 (island areas)?
-    label: Greater than or equal to the 90th percentile for households at or below 100% federal poverty level and has low HS education in 2009 (island areas)?
-    format: bool
+  - score_name: Proximity to NPL sites
+    label: Proximity to NPL (Superfund) sites
+    format: float
-  - score_name: Greater than or equal to the 90th percentile for low median household income as a percent of area median income and has low HS education in 2009 (island areas)?
-    label: Greater than or equal to the 90th percentile for low median household income as a percent of area median income and has low HS education in 2009 (island areas)?
+  - score_name: Greater than or equal to the 90th percentile for proximity to RMP sites and is low income?
+    label: Greater than or equal to the 90th percentile for proximity to RMP sites and is low income?
     format: bool
-  - score_name: Percent of population not currently enrolled in college or graduate school
-    label: Percent of residents who are not currently enrolled in higher ed
+  - score_name: Proximity to Risk Management Plan (RMP) facilities (percentile)
+    label: Proximity to Risk Management Plan (RMP) facilities (percentile)
     format: percentage
+  - score_name: Proximity to Risk Management Plan (RMP) facilities
+    label: Proximity to Risk Management Plan (RMP) facilities
+    format: float
+  - score_name: Is there at least one Formerly Used Defense Site (FUDS) in the tract?
+    label: Is there at least one Formerly Used Defense Site (FUDS) in the tract?
+    format: bool
+  - score_name: Is there at least one abandoned mine in this census tract?
+    label: Is there at least one abandoned mine in this census tract?
+    format: bool
+  - score_name: There is at least one abandoned mine in this census tract and the tract is low income.
+    label: There is at least one abandoned mine in this census tract and the tract is low income.
+    format: bool
+  - score_name: There is at least one Formerly Used Defense Site (FUDS) in the tract and the tract is low income.
+    label: There is at least one Formerly Used Defense Site (FUDS) in the tract and the tract is low income.
+    format: bool
+  - score_name: Is there at least one Formerly Used Defense Site (FUDS) in the tract, where missing data is treated as False?
+    label: Is there at least one Formerly Used Defense Site (FUDS) in the tract, where missing data is treated as False?
+    format: bool
+  - score_name: Is there at least one abandoned mine in this census tract, where missing data is treated as False?
+    label: Is there at least one abandoned mine in this census tract, where missing data is treated as False?
+    format: bool
+  - score_name: Greater than or equal to the 90th percentile for wastewater discharge and is low income?
+    label: Greater than or equal to the 90th percentile for wastewater discharge and is low income?
+    format: bool
+  - score_name: Wastewater discharge (percentile)
+    label: Wastewater discharge (percentile)
+    format: percentage
+  - score_name: Wastewater discharge
+    label: Wastewater discharge
+    format: float
+  - score_name: Greater than or equal to the 90th percentile for leaky underground storage tanks and is low income?
+    label: Greater than or equal to the 90th percentile for leaky underground storage tanks and is low income?
+    format: bool
+  - score_name: Leaky underground storage tanks (percentile)
+    label: Leaky underground storage tanks (percentile)
+    format: percentage
+  - score_name: Leaky underground storage tanks
+    label: Leaky underground storage tanks
+    format: float
+  - score_name: Greater than or equal to the 90th percentile for asthma and is low income?
+    label: Greater than or equal to the 90th percentile for asthma and is low income?
+    format: bool
+  - score_name: Current asthma among adults aged greater than or equal to 18 years (percentile)
+    label: Current asthma among adults aged greater than or equal to 18 years (percentile)
+    format: percentage
+  - score_name: Current asthma among adults aged greater than or equal to 18 years
+    label: Current asthma among adults aged greater than or equal to 18 years
+    format: percentage
+  - score_name: Greater than or equal to the 90th percentile for diabetes and is low income?
+    label: Greater than or equal to the 90th percentile for diabetes and is low income?
+    format: bool
+  - score_name: Diagnosed diabetes among adults aged greater than or equal to 18 years (percentile)
+    label: Diagnosed diabetes among adults aged greater than or equal to 18 years (percentile)
+    format: percentage
+  - score_name: Diagnosed diabetes among adults aged greater than or equal to 18 years
+    label: Diagnosed diabetes among adults aged greater than or equal to 18 years
+    format: percentage
+  - score_name: Greater than or equal to the 90th percentile for heart disease and is low income?
+    label: Greater than or equal to the 90th percentile for heart disease and is low income?
+    format: bool
+  - score_name: Coronary heart disease among adults aged greater than or equal to 18 years (percentile)
+    label: Coronary heart disease among adults aged greater than or equal to 18 years (percentile)
+    format: percentage
+  - score_name: Coronary heart disease among adults aged greater than or equal to 18 years
+    label: Coronary heart disease among adults aged greater than or equal to 18 years
+    format: percentage
+  - score_name: Greater than or equal to the 90th percentile for low life expectancy and is low income?
+    label: Greater than or equal to the 90th percentile for low life expectancy and is low income?
+    format: bool
+  - score_name: Low life expectancy (percentile)
+    label: Low life expectancy (percentile)
+    format: percentage
+  - score_name: Life expectancy (years)
+    label: Life expectancy (years)
+    format: float
+  - score_name: Greater than or equal to the 90th percentile for low median household income as a percent of area median income and has low HS attainment?
+    label: Greater than or equal to the 90th percentile for low median household income as a percent of area median income and has low HS attainment?
+    format: bool
+  - score_name: Low median household income as a percent of area median income (percentile)
+    label: Low median household income as a percent of area median income (percentile)
+    format: percentage
+  - score_name: Median household income as a percent of area median income
+    label: Median household income as a percent of area median income
+    format: percentage
+  - score_name: Greater than or equal to the 90th percentile for households in linguistic isolation and has low HS attainment?
+    label: Greater than or equal to the 90th percentile for households in linguistic isolation and has low HS attainment?
+    format: bool
+  - score_name: Linguistic isolation (percent) (percentile)
+    label: Linguistic isolation (percent) (percentile)
+    format: percentage
+  - score_name: Linguistic isolation (percent)
+    label: Linguistic isolation (percent)
+    format: percentage
+  - score_name: Greater than or equal to the 90th percentile for unemployment and has low HS attainment?
+    label: Greater than or equal to the 90th percentile for unemployment and has low HS attainment?
+    format: bool
+  - score_name: Unemployment (percent) (percentile)
+    label: Unemployment (percent) (percentile)
+    format: percentage
+  - score_name: Unemployment (percent)
+    label: Unemployment (percent)
+    format: percentage
+  - score_name: Greater than or equal to the 90th percentile for households at or below 100% federal poverty level and has low HS attainment?
+    label: Greater than or equal to the 90th percentile for households at or below 100% federal poverty level and has low HS attainment?
+    format: bool
+  - score_name: Percent of individuals below 200% Federal Poverty Line (percentile)
+    label: Percent of individuals below 200% Federal Poverty Line (percentile)
+    format: percentage
+  - score_name: Percent of individuals below 200% Federal Poverty Line
+    label: Percent of individuals below 200% Federal Poverty Line
+    format: percentage
+  - score_name: Percent of individuals < 100% Federal Poverty Line (percentile)
+    label: Percent of individuals < 100% Federal Poverty Line (percentile)
+    format: percentage
+  - score_name: Percent of individuals < 100% Federal Poverty Line
+    label: Percent of individuals < 100% Federal Poverty Line
+    format: percentage
+  - score_name: Percent individuals age 25 or over with less than high school degree (percentile)
+    label: Percent individuals age 25 or over with less than high school degree (percentile)
+    format: percentage
+  - score_name: Percent individuals age 25 or over with less than high school degree
+    label: Percent individuals age 25 or over with less than high school degree
+    format: percentage
+  - score_name: Percent of population not currently enrolled in college or graduate school
+    label: Percent of residents who are not currently enrolled in higher ed
+    format: percentage
+  - score_name: Unemployment (percent) in 2009 (island areas) and 2010 (states and PR)
+    label: Unemployment (percent) in 2009 (island areas) and 2010 (states and PR)
+    format: percentage
+  - score_name: Percentage households below 100% of federal poverty line in 2009 (island areas) and 2010 (states and PR)
+    label: Percentage households below 100% of federal poverty line in 2009 (island areas) and 2010 (states and PR)
+    format: percentage
+  - score_name: Greater than or equal to the 90th percentile for unemployment and has low HS education in 2009 (island areas)?
+    label: Greater than or equal to the 90th percentile for unemployment and has low HS education in 2009 (island areas)?
+    format: bool
+  - score_name: Greater than or equal to the 90th percentile for households at or below 100% federal poverty level and has low HS education in 2009 (island areas)?
+    label: Greater than or equal to the 90th percentile for households at or below 100% federal poverty level and has low HS education in 2009 (island areas)?
+    format: bool
+  - score_name: Greater than or equal to the 90th percentile for low median household income as a percent of area median income and has low HS education in 2009 (island areas)?
+    label: Greater than or equal to the 90th percentile for low median household income as a percent of area median income and has low HS education in 2009 (island areas)?
+    format: bool
+  - score_name: Number of Tribal areas within Census tract for Alaska
+    label: Number of Tribal areas within Census tract for Alaska
+    format: int64
+  - score_name: Names of Tribal areas within Census tract
+    label: Names of Tribal areas within Census tract
+    format: string
+  - score_name: Percent of the Census tract that is within Tribal areas
+    label: Percent of the Census tract that is within Tribal areas
+    format: percentage
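Both the tract comparison config above and the Excel `sheets` config below share the same field shape: each `fields` entry maps an internal `score_name` column to a human-readable `label` and a `format` tag, with `global_config.rounding_num` supplying per-format rounding precision. A sketch of how a consumer might interpret the file; the filename and the `FORMAT_CODES` mapping are illustrative assumptions, not the repo's actual logic:

```python
import yaml

# Hypothetical mapping from a field's `format` tag to an Excel number format,
# using the rounding precision declared under global_config.rounding_num.
FORMAT_CODES = {
    "percentage": "0.00%",
    "loss_rate_percentage": "0.0000%",  # rounding_num: loss_rate_percentage: 4
    "float": "0.00",                    # rounding_num: float: 2
    "int64": "0",
    "bool": None,   # rendered as TRUE/FALSE
    "string": None,
}

with open("tract_comparison.yaml", encoding="utf-8") as f:  # hypothetical filename
    config = yaml.safe_load(f)

for field in config["fields"]:
    code = FORMAT_CODES.get(field["format"])
    print(f'{field["score_name"]} -> {field["label"]} ({code or "as-is"})')
```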
@@ -1,6 +1,6 @@
 ---
 global_config:
-  sort_by_label: Census tract ID
+  sort_by_label: Census tract 2010 ID
   rounding_num:
     float: 2
     loss_rate_percentage: 4
@@ -10,7 +10,7 @@ sheets:
   - label: "Data"
     fields:
       - score_name: GEOID10_TRACT
-        label: Census tract ID
+        label: Census tract 2010 ID
         format: string
       - score_name: County Name
         label: County Name
@@ -18,23 +18,80 @@ sheets:
       - score_name: State/Territory
         label: State/Territory
         format: string
+      - score_name: Percent Black or African American
+        label: Percent Black or African American alone
+        format: float
+      - score_name: Percent American Indian / Alaska Native
+        label: Percent American Indian / Alaska Native
+        format: float
+      - score_name: Percent Asian
+        label: Percent Asian
+        format: float
+      - score_name: Percent Native Hawaiian or Pacific
+        label: Percent Native Hawaiian or Pacific
+        format: float
+      - score_name: Percent two or more races
+        label: Percent two or more races
+        format: float
+      - score_name: Percent White
+        label: Percent White
+        format: float
+      - score_name: Percent Hispanic or Latino
+        label: Percent Hispanic or Latino
+        format: float
+      - score_name: Percent other races
+        label: Percent other races
+        format: float
+      - score_name: Percent age under 10
+        label: Percent age under 10
+        format: float
+      - score_name: Percent age 10 to 64
+        label: Percent age 10 to 64
+        format: float
+      - score_name: Percent age over 64
+        label: Percent age over 64
+        format: float
       - score_name: Total threshold criteria exceeded
         label: Total threshold criteria exceeded
         format: int64
       - score_name: Total categories exceeded
         label: Total categories exceeded
         format: int64
-      - score_name: Definition M (communities)
+      - score_name: Definition N (communities)
+        label: Identified as disadvantaged without considering neighbors
+        format: bool
+      - score_name: Definition N (communities) (based on adjacency index and low income alone)
+        label: Identified as disadvantaged based on neighbors and relaxed low income threshold only
+        format: bool
+      - score_name: Identified as disadvantaged due to tribal overlap
+        label: Identified as disadvantaged due to tribal overlap
+        format: bool
+      - score_name: Definition N community, including adjacency index tracts
         label: Identified as disadvantaged
         format: bool
+      - score_name: Percentage of tract that is disadvantaged
+        label: Percentage of tract that is disadvantaged by area
+        format: percentage
+      - score_name: Definition N (communities) (average of neighbors)
+        label: Share of neighbors that are identified as disadvantaged
+        format: percentage
       - score_name: Total population
         label: Total population
         format: float
-      - score_name: Is low income and has a low percent of higher ed students?
-        label: Is low income and high percent of residents that are not higher ed students?
+      - score_name: Percent of individuals below 200% Federal Poverty Line, imputed and adjusted (percentile)
+        label: Adjusted percent of individuals below 200% Federal Poverty Line (percentile)
+        format: float
+      - score_name: Percent of individuals below 200% Federal Poverty Line, imputed and adjusted
+        label: Adjusted percent of individuals below 200% Federal Poverty Line
+        format: float
+      - score_name: Is low income (imputed and adjusted)?
+        label: Is low income?
         format: bool
-      - score_name: Greater than or equal to the 90th percentile for expected agriculture loss rate, is low income, and has a low percent of higher ed students?
-        label: Greater than or equal to the 90th percentile for expected agriculture loss rate, is low income, and high percent of residents that are not higher ed students?
+      - score_name: Income data has been estimated based on neighbor income
+        label: Income data has been estimated based on geographic neighbor income
+        format: bool
+      - score_name: Greater than or equal to the 90th percentile for expected agriculture loss rate and is low income?
+        label: Greater than or equal to the 90th percentile for expected agriculture loss rate and is low income?
         format: bool
       - score_name: Expected agricultural loss rate (Natural Hazards Risk Index) (percentile)
         label: Expected agricultural loss rate (Natural Hazards Risk Index) (percentile)
@@ -42,8 +99,8 @@ sheets:
       - score_name: Expected agricultural loss rate (Natural Hazards Risk Index)
         label: Expected agricultural loss rate (Natural Hazards Risk Index)
         format: loss_rate_percentage
-      - score_name: Greater than or equal to the 90th percentile for expected building loss rate, is low income, and has a low percent of higher ed students?
-        label: Greater than or equal to the 90th percentile for expected building loss rate, is low income, and high percent of residents that are not higher ed students?
+      - score_name: Greater than or equal to the 90th percentile for expected building loss rate and is low income?
+        label: Greater than or equal to the 90th percentile for expected building loss rate and is low income?
         format: bool
       - score_name: Expected building loss rate (Natural Hazards Risk Index) (percentile)
         label: Expected building loss rate (Natural Hazards Risk Index) (percentile)
@@ -51,8 +108,8 @@ sheets:
       - score_name: Expected building loss rate (Natural Hazards Risk Index)
         label: Expected building loss rate (Natural Hazards Risk Index)
         format: loss_rate_percentage
-      - score_name: Greater than or equal to the 90th percentile for expected population loss rate, is low income, and has a low percent of higher ed students?
-        label: Greater than or equal to the 90th percentile for expected population loss rate, is low income, and high percent of residents that are not higher ed students?
+      - score_name: Greater than or equal to the 90th percentile for expected population loss rate and is low income?
+        label: Greater than or equal to the 90th percentile for expected population loss rate and is low income?
         format: bool
       - score_name: Expected population loss rate (Natural Hazards Risk Index) (percentile)
         label: Expected population loss rate (Natural Hazards Risk Index) (percentile)
@@ -60,8 +117,32 @@ sheets:
       - score_name: Expected population loss rate (Natural Hazards Risk Index)
         label: Expected population loss rate (Natural Hazards Risk Index)
         format: loss_rate_percentage
-      - score_name: Greater than or equal to the 90th percentile for energy burden, is low income, and has a low percent of higher ed students?
-        label: Greater than or equal to the 90th percentile for energy burden, is low income, and high percent of residents that are not higher ed students?
+      - score_name: Share of properties at risk of flood in 30 years (percentile)
+        label: Share of properties at risk of flood in 30 years (percentile)
|
format: percentage
|
||||||
|
- score_name: Share of properties at risk of flood in 30 years
|
||||||
|
label: Share of properties at risk of flood in 30 years
|
||||||
|
format: percentage
|
||||||
|
- score_name: Greater than or equal to the 90th percentile for share of properties at risk of flood in 30 years
|
||||||
|
label: Greater than or equal to the 90th percentile for share of properties at risk of flood in 30 years
|
||||||
|
format: bool
|
||||||
|
- score_name: Greater than or equal to the 90th percentile for share of properties at risk of flood in 30 years and is low income?
|
||||||
|
label: Greater than or equal to the 90th percentile for share of properties at risk of flood in 30 years and is low income?
|
||||||
|
format: bool
|
||||||
|
- score_name: Share of properties at risk of fire in 30 years (percentile)
|
||||||
|
label: Share of properties at risk of fire in 30 years (percentile)
|
||||||
|
format: percentage
|
||||||
|
- score_name: Share of properties at risk of fire in 30 years
|
||||||
|
label: Share of properties at risk of fire in 30 years
|
||||||
|
format: percentage
|
||||||
|
- score_name: Greater than or equal to the 90th percentile for share of properties at risk of fire in 30 years
|
||||||
|
label: Greater than or equal to the 90th percentile for share of properties at risk of fire in 30 years
|
||||||
|
format: bool
|
||||||
|
- score_name: Greater than or equal to the 90th percentile for share of properties at risk of fire in 30 years and is low income?
|
||||||
|
label: Greater than or equal to the 90th percentile for share of properties at risk of fire in 30 years and is low income?
|
||||||
|
format: bool
|
||||||
|
- score_name: Greater than or equal to the 90th percentile for energy burden and is low income?
|
||||||
|
label: Greater than or equal to the 90th percentile for energy burden and is low income?
|
||||||
format: bool
|
format: bool
|
||||||
- score_name: Energy burden (percentile)
|
- score_name: Energy burden (percentile)
|
||||||
label: Energy burden (percentile)
|
label: Energy burden (percentile)
|
||||||
|
@ -69,8 +150,8 @@ sheets:
|
||||||
- score_name: Energy burden
|
- score_name: Energy burden
|
||||||
label: Energy burden
|
label: Energy burden
|
||||||
format: percentage
|
format: percentage
|
||||||
- score_name: Greater than or equal to the 90th percentile for PM2.5 exposure, is low income, and has a low percent of higher ed students?
|
- score_name: Greater than or equal to the 90th percentile for PM2.5 exposure and is low income?
|
||||||
label: Greater than or equal to the 90th percentile for PM2.5 exposure, is low income, and high percent of residents that are not higher ed students?
|
label: Greater than or equal to the 90th percentile for PM2.5 exposure and is low income?
|
||||||
format: bool
|
format: bool
|
||||||
- score_name: PM2.5 in the air (percentile)
|
- score_name: PM2.5 in the air (percentile)
|
||||||
label: PM2.5 in the air (percentile)
|
label: PM2.5 in the air (percentile)
|
||||||
|
@ -78,8 +159,8 @@ sheets:
|
||||||
- score_name: PM2.5 in the air
|
- score_name: PM2.5 in the air
|
||||||
label: PM2.5 in the air
|
label: PM2.5 in the air
|
||||||
format: float
|
format: float
|
||||||
- score_name: Greater than or equal to the 90th percentile for diesel particulate matter, is low income, and has a low percent of higher ed students?
|
- score_name: Greater than or equal to the 90th percentile for diesel particulate matter and is low income?
|
||||||
label: Greater than or equal to the 90th percentile for diesel particulate matter, is low income, and high percent of residents that are not higher ed students?
|
label: Greater than or equal to the 90th percentile for diesel particulate matter and is low income?
|
||||||
format: bool
|
format: bool
|
||||||
- score_name: Diesel particulate matter exposure (percentile)
|
- score_name: Diesel particulate matter exposure (percentile)
|
||||||
label: Diesel particulate matter exposure (percentile)
|
label: Diesel particulate matter exposure (percentile)
|
||||||
|
@ -87,8 +168,8 @@ sheets:
|
||||||
- score_name: Diesel particulate matter exposure
|
- score_name: Diesel particulate matter exposure
|
||||||
label: Diesel particulate matter exposure
|
label: Diesel particulate matter exposure
|
||||||
format: float
|
format: float
|
||||||
- score_name: Greater than or equal to the 90th percentile for traffic proximity, is low income, and has a low percent of higher ed students?
|
- score_name: Greater than or equal to the 90th percentile for traffic proximity and is low income?
|
||||||
label: Greater than or equal to the 90th percentile for traffic proximity, is low income, and high percent of residents that are not higher ed students?
|
label: Greater than or equal to the 90th percentile for traffic proximity and is low income?
|
||||||
format: bool
|
format: bool
|
||||||
- score_name: Traffic proximity and volume (percentile)
|
- score_name: Traffic proximity and volume (percentile)
|
||||||
label: Traffic proximity and volume (percentile)
|
label: Traffic proximity and volume (percentile)
|
||||||
|
@ -96,8 +177,14 @@ sheets:
|
||||||
- score_name: Traffic proximity and volume
|
- score_name: Traffic proximity and volume
|
||||||
label: Traffic proximity and volume
|
label: Traffic proximity and volume
|
||||||
format: float
|
format: float
|
||||||
- score_name: Greater than or equal to the 90th percentile for housing burden, is low income, and has a low percent of higher ed students?
|
- score_name: Greater than or equal to the 90th percentile for DOT transit barriers and is low income?
|
||||||
label: Greater than or equal to the 90th percentile for housing burden, is low income, and high percent of residents that are not higher ed students?
|
label: Greater than or equal to the 90th percentile for DOT transit barriers and is low income?
|
||||||
|
format: bool
|
||||||
|
- score_name: DOT Travel Barriers Score (percentile)
|
||||||
|
label: DOT Travel Barriers Score (percentile)
|
||||||
|
format: percentage
|
||||||
|
- score_name: Greater than or equal to the 90th percentile for housing burden and is low income?
|
||||||
|
label: Greater than or equal to the 90th percentile for housing burden and is low income?
|
||||||
format: bool
|
format: bool
|
||||||
- score_name: Housing burden (percent) (percentile)
|
- score_name: Housing burden (percent) (percentile)
|
||||||
label: Housing burden (percent) (percentile)
|
label: Housing burden (percent) (percentile)
|
||||||
|
@ -105,8 +192,8 @@ sheets:
|
||||||
- score_name: Housing burden (percent)
|
- score_name: Housing burden (percent)
|
||||||
label: Housing burden (percent)
|
label: Housing burden (percent)
|
||||||
format: percentage
|
format: percentage
|
||||||
- score_name: Greater than or equal to the 90th percentile for lead paint, the median house value is less than 90th percentile, is low income, and has a low percent of higher ed students?
|
- score_name: Greater than or equal to the 90th percentile for lead paint and the median house value is less than 90th percentile and is low income?
|
||||||
label: Greater than or equal to the 90th percentile for lead paint, the median house value is less than 90th percentile, is low income, and high percent of residents that are not higher ed students?
|
label: Greater than or equal to the 90th percentile for lead paint, the median house value is less than 90th percentile and is low income?
|
||||||
format: bool
|
format: bool
|
||||||
- score_name: Percent pre-1960s housing (lead paint indicator) (percentile)
|
- score_name: Percent pre-1960s housing (lead paint indicator) (percentile)
|
||||||
label: Percent pre-1960s housing (lead paint indicator) (percentile)
|
label: Percent pre-1960s housing (lead paint indicator) (percentile)
|
||||||
|
@ -120,8 +207,35 @@ sheets:
|
||||||
- score_name: Median value ($) of owner-occupied housing units
|
- score_name: Median value ($) of owner-occupied housing units
|
||||||
label: Median value ($) of owner-occupied housing units
|
label: Median value ($) of owner-occupied housing units
|
||||||
format: float
|
format: float
|
||||||
- score_name: Greater than or equal to the 90th percentile for proximity to hazardous waste facilities, is low income, and has a low percent of higher ed students?
|
- score_name: Greater than or equal to the 90th percentile for share of the tract's land area that is covered by impervious surface or cropland as a percent and is low income?
|
||||||
label: Greater than or equal to the 90th percentile for proximity to hazardous waste facilities, is low income, and high percent of residents that are not higher ed students?
|
label: Greater than or equal to the 90th percentile for share of the tract's land area that is covered by impervious surface or cropland as a percent and is low income?
|
||||||
|
format: bool
|
||||||
|
- score_name: Greater than or equal to the 90th percentile for share of the tract's land area that is covered by impervious surface or cropland as a percent
|
||||||
|
label: Greater than or equal to the 90th percentile for share of the tract's land area that is covered by impervious surface or cropland as a percent
|
||||||
|
format: bool
|
||||||
|
- score_name: Share of the tract's land area that is covered by impervious surface or cropland as a percent
|
||||||
|
label: Share of the tract's land area that is covered by impervious surface or cropland as a percent
|
||||||
|
format: percentage
|
||||||
|
- score_name: Share of the tract's land area that is covered by impervious surface or cropland as a percent (percentile)
|
||||||
|
label: Share of the tract's land area that is covered by impervious surface or cropland as a percent (percentile)
|
||||||
|
format: percentage
|
||||||
|
- score_name: Does the tract have at least 35 acres in it?
|
||||||
|
label: Does the tract have at least 35 acres in it?
|
||||||
|
format: bool
|
||||||
|
- score_name: Tract-level redlining score meets or exceeds 3.25 and is low income
|
||||||
|
label: Tract experienced historic underinvestment and remains low income
|
||||||
|
format: bool
|
||||||
|
- score_name: Tract-level redlining score meets or exceeds 3.25
|
||||||
|
label: Tract experienced historic underinvestment
|
||||||
|
format: bool
|
||||||
|
- score_name: Share of homes with no kitchen or indoor plumbing (percent) (percentile)
|
||||||
|
label: Share of homes with no kitchen or indoor plumbing (percentile)
|
||||||
|
format: float
|
||||||
|
- score_name: Share of homes with no kitchen or indoor plumbing (percent)
|
||||||
|
label: Share of homes with no kitchen or indoor plumbing (percent)
|
||||||
|
format: float
|
||||||
|
- score_name: Greater than or equal to the 90th percentile for proximity to hazardous waste facilities and is low income?
|
||||||
|
label: Greater than or equal to the 90th percentile for proximity to hazardous waste facilities and is low income?
|
||||||
format: bool
|
format: bool
|
||||||
- score_name: Proximity to hazardous waste sites (percentile)
|
- score_name: Proximity to hazardous waste sites (percentile)
|
||||||
label: Proximity to hazardous waste sites (percentile)
|
label: Proximity to hazardous waste sites (percentile)
|
||||||
|
@ -129,8 +243,8 @@ sheets:
|
||||||
- score_name: Proximity to hazardous waste sites
|
- score_name: Proximity to hazardous waste sites
|
||||||
label: Proximity to hazardous waste sites
|
label: Proximity to hazardous waste sites
|
||||||
format: float
|
format: float
|
||||||
- score_name: Greater than or equal to the 90th percentile for proximity to superfund sites, is low income, and has a low percent of higher ed students?
|
- score_name: Greater than or equal to the 90th percentile for proximity to superfund sites and is low income?
|
||||||
label: Greater than or equal to the 90th percentile for proximity to superfund sites, is low income, and high percent of residents that are not higher ed students?
|
label: Greater than or equal to the 90th percentile for proximity to superfund sites and is low income?
|
||||||
format: bool
|
format: bool
|
||||||
- score_name: Proximity to NPL sites (percentile)
|
- score_name: Proximity to NPL sites (percentile)
|
||||||
label: Proximity to NPL (Superfund) sites (percentile)
|
label: Proximity to NPL (Superfund) sites (percentile)
|
||||||
|
@ -138,8 +252,8 @@ sheets:
|
||||||
- score_name: Proximity to NPL sites
|
- score_name: Proximity to NPL sites
|
||||||
label: Proximity to NPL (Superfund) sites
|
label: Proximity to NPL (Superfund) sites
|
||||||
format: float
|
format: float
|
||||||
- score_name: Greater than or equal to the 90th percentile for proximity to RMP sites, is low income, and has a low percent of higher ed students?
|
- score_name: Greater than or equal to the 90th percentile for proximity to RMP sites and is low income?
|
||||||
label: Greater than or equal to the 90th percentile for proximity to RMP sites, is low income, and high percent of residents that are not higher ed students?
|
label: Greater than or equal to the 90th percentile for proximity to RMP sites and is low income?
|
||||||
format: bool
|
format: bool
|
||||||
- score_name: Proximity to Risk Management Plan (RMP) facilities (percentile)
|
- score_name: Proximity to Risk Management Plan (RMP) facilities (percentile)
|
||||||
label: Proximity to Risk Management Plan (RMP) facilities (percentile)
|
label: Proximity to Risk Management Plan (RMP) facilities (percentile)
|
||||||
|
@ -147,8 +261,26 @@ sheets:
|
||||||
- score_name: Proximity to Risk Management Plan (RMP) facilities
|
- score_name: Proximity to Risk Management Plan (RMP) facilities
|
||||||
label: Proximity to Risk Management Plan (RMP) facilities
|
label: Proximity to Risk Management Plan (RMP) facilities
|
||||||
format: float
|
format: float
|
||||||
- score_name: Greater than or equal to the 90th percentile for wastewater discharge, is low income, and has a low percent of higher ed students?
|
- score_name: Is there at least one Formerly Used Defense Site (FUDS) in the tract?
|
||||||
label: Greater than or equal to the 90th percentile for wastewater discharge, is low income, and high percent of residents that are not higher ed students?
|
label: Is there at least one Formerly Used Defense Site (FUDS) in the tract?
|
||||||
|
format: bool
|
||||||
|
- score_name: Is there at least one abandoned mine in this census tract?
|
||||||
|
label: Is there at least one abandoned mine in this census tract?
|
||||||
|
format: bool
|
||||||
|
- score_name: There is at least one abandoned mine in this census tract and the tract is low income.
|
||||||
|
label: There is at least one abandoned mine in this census tract and the tract is low income.
|
||||||
|
format: bool
|
||||||
|
- score_name: There is at least one Formerly Used Defense Site (FUDS) in the tract and the tract is low income.
|
||||||
|
label: There is at least one Formerly Used Defense Site (FUDS) in the tract and the tract is low income.
|
||||||
|
format: bool
|
||||||
|
- score_name: Is there at least one Formerly Used Defense Site (FUDS) in the tract, where missing data is treated as False?
|
||||||
|
label: Is there at least one Formerly Used Defense Site (FUDS) in the tract, where missing data is treated as False?
|
||||||
|
format: bool
|
||||||
|
- score_name: Is there at least one abandoned mine in this census tract, where missing data is treated as False?
|
||||||
|
label: Is there at least one abandoned mine in this census tract, where missing data is treated as False?
|
||||||
|
format: bool
|
||||||
|
- score_name: Greater than or equal to the 90th percentile for wastewater discharge and is low income?
|
||||||
|
label: Greater than or equal to the 90th percentile for wastewater discharge and is low income?
|
||||||
format: bool
|
format: bool
|
||||||
- score_name: Wastewater discharge (percentile)
|
- score_name: Wastewater discharge (percentile)
|
||||||
label: Wastewater discharge (percentile)
|
label: Wastewater discharge (percentile)
|
||||||
|
@ -156,8 +288,17 @@ sheets:
|
||||||
- score_name: Wastewater discharge
|
- score_name: Wastewater discharge
|
||||||
label: Wastewater discharge
|
label: Wastewater discharge
|
||||||
format: float
|
format: float
|
||||||
- score_name: Greater than or equal to the 90th percentile for asthma, is low income, and has a low percent of higher ed students?
|
- score_name: Greater than or equal to the 90th percentile for leaky underground storage tanks and is low income?
|
||||||
label: Greater than or equal to the 90th percentile for asthma, is low income, and high percent of residents that are not higher ed students?
|
label: Greater than or equal to the 90th percentile for leaky underground storage tanks and is low income?
|
||||||
|
format: bool
|
||||||
|
- score_name: Leaky underground storage tanks (percentile)
|
||||||
|
label: Leaky underground storage tanks (percentile)
|
||||||
|
format: percentage
|
||||||
|
- score_name: Leaky underground storage tanks
|
||||||
|
label: Leaky underground storage tanks
|
||||||
|
format: float
|
||||||
|
- score_name: Greater than or equal to the 90th percentile for asthma and is low income?
|
||||||
|
label: Greater than or equal to the 90th percentile for asthma and is low income?
|
||||||
format: bool
|
format: bool
|
||||||
- score_name: Current asthma among adults aged greater than or equal to 18 years (percentile)
|
- score_name: Current asthma among adults aged greater than or equal to 18 years (percentile)
|
||||||
label: Current asthma among adults aged greater than or equal to 18 years (percentile)
|
label: Current asthma among adults aged greater than or equal to 18 years (percentile)
|
||||||
|
@ -165,8 +306,8 @@ sheets:
|
||||||
- score_name: Current asthma among adults aged greater than or equal to 18 years
|
- score_name: Current asthma among adults aged greater than or equal to 18 years
|
||||||
label: Current asthma among adults aged greater than or equal to 18 years
|
label: Current asthma among adults aged greater than or equal to 18 years
|
||||||
format: percentage
|
format: percentage
|
||||||
- score_name: Greater than or equal to the 90th percentile for diabetes, is low income, and has a low percent of higher ed students?
|
- score_name: Greater than or equal to the 90th percentile for diabetes and is low income?
|
||||||
label: Greater than or equal to the 90th percentile for diabetes, is low income, and high percent of residents that are not higher ed students?
|
label: Greater than or equal to the 90th percentile for diabetes and is low income?
|
||||||
format: bool
|
format: bool
|
||||||
- score_name: Diagnosed diabetes among adults aged greater than or equal to 18 years (percentile)
|
- score_name: Diagnosed diabetes among adults aged greater than or equal to 18 years (percentile)
|
||||||
label: Diagnosed diabetes among adults aged greater than or equal to 18 years (percentile)
|
label: Diagnosed diabetes among adults aged greater than or equal to 18 years (percentile)
|
||||||
|
@ -174,8 +315,8 @@ sheets:
|
||||||
- score_name: Diagnosed diabetes among adults aged greater than or equal to 18 years
|
- score_name: Diagnosed diabetes among adults aged greater than or equal to 18 years
|
||||||
label: Diagnosed diabetes among adults aged greater than or equal to 18 years
|
label: Diagnosed diabetes among adults aged greater than or equal to 18 years
|
||||||
format: percentage
|
format: percentage
|
||||||
- score_name: Greater than or equal to the 90th percentile for heart disease, is low income, and has a low percent of higher ed students?
|
- score_name: Greater than or equal to the 90th percentile for heart disease and is low income?
|
||||||
label: Greater than or equal to the 90th percentile for heart disease, is low income, and high percent of residents that are not higher ed students?
|
label: Greater than or equal to the 90th percentile for heart disease and is low income?
|
||||||
format: bool
|
format: bool
|
||||||
- score_name: Coronary heart disease among adults aged greater than or equal to 18 years (percentile)
|
- score_name: Coronary heart disease among adults aged greater than or equal to 18 years (percentile)
|
||||||
label: Coronary heart disease among adults aged greater than or equal to 18 years (percentile)
|
label: Coronary heart disease among adults aged greater than or equal to 18 years (percentile)
|
||||||
|
@ -183,8 +324,8 @@ sheets:
|
||||||
- score_name: Coronary heart disease among adults aged greater than or equal to 18 years
|
- score_name: Coronary heart disease among adults aged greater than or equal to 18 years
|
||||||
label: Coronary heart disease among adults aged greater than or equal to 18 years
|
label: Coronary heart disease among adults aged greater than or equal to 18 years
|
||||||
format: percentage
|
format: percentage
|
||||||
- score_name: Greater than or equal to the 90th percentile for low life expectancy, is low income, and has a low percent of higher ed students?
|
- score_name: Greater than or equal to the 90th percentile for low life expectancy and is low income?
|
||||||
label: Greater than or equal to the 90th percentile for low life expectancy, is low income, and high percent of residents that are not higher ed students?
|
label: Greater than or equal to the 90th percentile for low life expectancy and is low income?
|
||||||
format: bool
|
format: bool
|
||||||
- score_name: Low life expectancy (percentile)
|
- score_name: Low life expectancy (percentile)
|
||||||
label: Low life expectancy (percentile)
|
label: Low life expectancy (percentile)
|
||||||
|
@ -192,8 +333,8 @@ sheets:
|
||||||
- score_name: Life expectancy (years)
|
- score_name: Life expectancy (years)
|
||||||
label: Life expectancy (years)
|
label: Life expectancy (years)
|
||||||
format: float
|
format: float
|
||||||
- score_name: Greater than or equal to the 90th percentile for low median household income as a percent of area median income, has low HS attainment, and has a low percent of higher ed students?
|
- score_name: Greater than or equal to the 90th percentile for low median household income as a percent of area median income and has low HS attainment?
|
||||||
label: Greater than or equal to the 90th percentile for low median household income as a percent of area median income, has low HS attainment, and high percent of residents that are not higher ed students?
|
label: Greater than or equal to the 90th percentile for low median household income as a percent of area median income and has low HS attainment?
|
||||||
format: bool
|
format: bool
|
||||||
- score_name: Low median household income as a percent of area median income (percentile)
|
- score_name: Low median household income as a percent of area median income (percentile)
|
||||||
label: Low median household income as a percent of area median income (percentile)
|
label: Low median household income as a percent of area median income (percentile)
|
||||||
|
@ -201,8 +342,8 @@ sheets:
|
||||||
- score_name: Median household income as a percent of area median income
|
- score_name: Median household income as a percent of area median income
|
||||||
label: Median household income as a percent of area median income
|
label: Median household income as a percent of area median income
|
||||||
format: percentage
|
format: percentage
|
||||||
- score_name: Greater than or equal to the 90th percentile for households in linguistic isolation, has low HS attainment, and has a low percent of higher ed students?
|
- score_name: Greater than or equal to the 90th percentile for households in linguistic isolation and has low HS attainment?
|
||||||
label: Greater than or equal to the 90th percentile for households in linguistic isolation, has low HS attainment, and high percent of residents that are not higher ed students?
|
label: Greater than or equal to the 90th percentile for households in linguistic isolation and has low HS attainment?
|
||||||
format: bool
|
format: bool
|
||||||
- score_name: Linguistic isolation (percent) (percentile)
|
- score_name: Linguistic isolation (percent) (percentile)
|
||||||
label: Linguistic isolation (percent) (percentile)
|
label: Linguistic isolation (percent) (percentile)
|
||||||
|
@ -210,8 +351,8 @@ sheets:
|
||||||
- score_name: Linguistic isolation (percent)
|
- score_name: Linguistic isolation (percent)
|
||||||
label: Linguistic isolation (percent)
|
label: Linguistic isolation (percent)
|
||||||
format: percentage
|
format: percentage
|
||||||
- score_name: Greater than or equal to the 90th percentile for unemployment, has low HS attainment, and has a low percent of higher ed students?
|
- score_name: Greater than or equal to the 90th percentile for unemployment and has low HS attainment?
|
||||||
label: Greater than or equal to the 90th percentile for unemployment, has low HS attainment, and high percent of residents that are not higher ed students?
|
label: Greater than or equal to the 90th percentile for unemployment and has low HS attainment?
|
||||||
format: bool
|
format: bool
|
||||||
- score_name: Unemployment (percent) (percentile)
|
- score_name: Unemployment (percent) (percentile)
|
||||||
label: Unemployment (percent) (percentile)
|
label: Unemployment (percent) (percentile)
|
||||||
|
@ -219,8 +360,8 @@ sheets:
|
||||||
- score_name: Unemployment (percent)
|
- score_name: Unemployment (percent)
|
||||||
label: Unemployment (percent)
|
label: Unemployment (percent)
|
||||||
format: percentage
|
format: percentage
|
||||||
- score_name: Greater than or equal to the 90th percentile for households at or below 100% federal poverty level, has low HS attainment, and has a low percent of higher ed students?
|
- score_name: Greater than or equal to the 90th percentile for households at or below 100% federal poverty level and has low HS attainment?
|
||||||
label: Greater than or equal to the 90th percentile for households at or below 100% federal poverty level, has low HS attainment, and high percent of residents that are not higher ed students?
|
label: Greater than or equal to the 90th percentile for households at or below 100% federal poverty level and has low HS attainment?
|
||||||
format: bool
|
format: bool
|
||||||
- score_name: Percent of individuals below 200% Federal Poverty Line (percentile)
|
- score_name: Percent of individuals below 200% Federal Poverty Line (percentile)
|
||||||
label: Percent of individuals below 200% Federal Poverty Line (percentile)
|
label: Percent of individuals below 200% Federal Poverty Line (percentile)
|
||||||
|
@ -258,3 +399,12 @@ sheets:
|
||||||
- score_name: Greater than or equal to the 90th percentile for low median household income as a percent of area median income and has low HS education in 2009 (island areas)?
|
- score_name: Greater than or equal to the 90th percentile for low median household income as a percent of area median income and has low HS education in 2009 (island areas)?
|
||||||
label: Greater than or equal to the 90th percentile for low median household income as a percent of area median income and has low HS education in 2009 (island areas)?
|
label: Greater than or equal to the 90th percentile for low median household income as a percent of area median income and has low HS education in 2009 (island areas)?
|
||||||
format: bool
|
format: bool
|
||||||
|
- score_name: Number of Tribal areas within Census tract for Alaska
|
||||||
|
label: Number of Tribal areas within Census tract for Alaska
|
||||||
|
format: int64
|
||||||
|
- score_name: Names of Tribal areas within Census tract
|
||||||
|
label: Names of Tribal areas within Census tract
|
||||||
|
format: string
|
||||||
|
- score_name: Percent of the Census tract that is within Tribal areas
|
||||||
|
label: Percent of the Census tract that is within Tribal areas
|
||||||
|
format: percentage
|
||||||
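Each entry above maps an internal score_name to a human-readable label plus a format hint for the downloadable files. As a rough illustration only — the function below, the top-level YAML key, and the casting rules are assumptions, not the pipeline's actual downloadable-file code — an entry list like this can drive column renaming and type coercion in pandas:

import pandas as pd
import yaml

def apply_sheet_fields(df: pd.DataFrame, yaml_text: str) -> pd.DataFrame:
    # Hypothetical consumer of the config above: coerce each score column
    # per its `format` hint, then rename it to its label. The "sheets"
    # key layout and the casting rules are illustrative assumptions.
    entries = yaml.safe_load(yaml_text)["sheets"]
    out = df.copy()
    renames = {}
    for entry in entries:
        name, fmt = entry["score_name"], entry["format"]
        if name not in out.columns:
            continue
        if fmt == "bool":
            out[name] = out[name].astype("boolean")  # nullable boolean
        elif fmt == "int64":
            out[name] = out[name].astype("Int64")    # nullable integer
        elif fmt != "string":  # float / percentage / loss_rate_percentage
            out[name] = pd.to_numeric(out[name], errors="coerce")
        renames[name] = entry["label"]
    return out.rename(columns=renames)

Using nullable dtypes ("boolean", "Int64") keeps tracts with missing indicators from crashing the cast, which matters here because several new fields (e.g. the FUDS and abandoned-mine flags) explicitly model missing data.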
@@ -1,7 +1,7 @@
 # This is a temporary file. We should make sure this *type* of information is maintained when we refactor.
 fields:
 - score_name: Total threshold criteria exceeded
   notes: Lists out the total number of criteria (where each category has one or more criteria) exceeded. For example, a tract that exceeds the 90th percentile for linguistic isolation (1) and unemployment (2), and meets the training and workforce development socioeconomic criteria (high school attainment rate and low percentage of higher ed students) would have a 2 in this field.
 - score_name: Definition M (communities)
   notes: True / False variable for whether a tract is a Disadvantaged Community (DAC)
 - score_name: Is low income and has a low percent of higher ed students?
@@ -43,7 +43,7 @@ fields:
 - score_name: Greater than or equal to the 90th percentile for low median household income as a percent of area median income, has low HS attainment, and has a low percent of higher ed students?
   category: training and workforce development
 - score_name: Greater than or equal to the 90th percentile for households in linguistic isolation, has low HS attainment, and has a low percent of higher ed students?
   category: training and workforce development
 - score_name: Greater than or equal to the 90th percentile for unemployment, has low HS attainment, and has a low percent of higher ed students?
   category: training and workforce development
 - score_name: Greater than or equal to the 90th percentile for households at or below 100% federal poverty level, has low HS attainment, and has a low percent of higher ed students?
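This second, self-described temporary file attaches per-column notes and a threshold category to the same score names. A minimal sketch of reading it back — the local filename fields.yml is hypothetical; only the "fields", "score_name", "notes", and "category" keys come from the diff above:

import yaml

with open("fields.yml", encoding="utf-8") as f:  # hypothetical local path
    fields = yaml.safe_load(f)["fields"]

# Per-column notes, and score names grouped by threshold category.
notes_by_score = {e["score_name"]: e.get("notes") for e in fields}
by_category = {}
for e in fields:
    if "category" in e:
        by_category.setdefault(e["category"], []).append(e["score_name"])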
data/data-pipeline/data_pipeline/content/config/scratch.ipynb (new file, 800 additions)
@@ -0,0 +1,800 @@
(rendered below as notebook cells; large DataFrame previews condensed)
In [1]:
import pandas as pd
import numpy as np

%load_ext lab_black

In [2]:
check = pd.read_csv(
    "/Users/emmausds/j40/data_pipeline/data/score/downloadable/codebook.csv"
)
Out[2]: 'Median value ($) of owner-occupied housing units (percentile)'
In [3]:
check

Out[3]:  (HTML rendering of the same preview omitted; csv_label and excel_label are identical here and shown once)
                                           score_name  csv_field_type                           csv_label / excel_label
0                                       GEOID10_TRACT          string                                    Census tract ID
1                                         County Name          string                                        County Name
2                                     State/Territory          string                                    State/Territory
3                   Total threshold criteria exceeded           int64                  Total threshold criteria exceeded
4                          Definition M (communities)            bool                        Identified as disadvantaged
..                                                ...             ...                                                ...
77  Percentage households below 100% of federal po...      percentage  Percentage households below 100% of federal po...
78  Greater than or equal to the 90th percentile f...            bool  Greater than or equal to the 90th percentile f...
79  Greater than or equal to the 90th percentile f...            bool  Greater than or equal to the 90th percentile f...
80  Greater than or equal to the 90th percentile f...            bool  Greater than or equal to the 90th percentile f...
81  Percent of population not currently enrolled i...      percentage  Percent of residents who are not currently enr...

                                    calculation_notes                  threshold_category                                              notes
0                                                 NaN                                 NaN                                                NaN
1                                                 NaN                                 NaN                                                NaN
2                                                 NaN                                 NaN                                                NaN
3                                                 NaN                                 NaN  Lists out the total number of criteria (where ...
4                                                 NaN                                 NaN  True / False variable for whether a tract is a...
..                                                ...                                 ...                                                ...
77  Because not all data is available for the Nati...                                 NaN                                                NaN
78  Because not all data is available for the Nati...  training and workforce development                                                NaN
79  Because not all data is available for the Nati...  training and workforce development                                                NaN
80  Because not all data is available for the Nati...  training and workforce development                                                NaN
81                                                NaN                                 NaN                                                NaN

[82 rows x 7 columns]
In [28]:
codebook = pd.DataFrame(to_frame_dict)

In [62]:
details_to_merge = pd.DataFrame(mapping_dictionary)

In [69]:
shapefile_codes = pd.read_csv(
    "/Users/emmausds/j40/data_pipeline/data/score/shapefile/columns.csv"
)
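Cells In [28]–In [69] stage three pieces — the codebook frame, a details mapping, and the shapefile column abbreviations (the CF, SF, TC, SM_C codes visible in Out[143] below) — that evidently get joined on score_name. The committed notebook does not show the actual merge, so the following is only a plausible sketch; the `on` key and `how` policy are assumptions:

# Assumes each frame carries a `score_name` column, which is not
# verifiable from this excerpt.
codebook_full = (
    codebook
    .merge(details_to_merge, on="score_name", how="left")
    .merge(shapefile_codes, on="score_name", how="left")
)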
In [153]:  (source empty in the committed notebook)
Out[153]:
['Expected agricultural loss rate (Natural Hazards Risk Index) (percentile)',
 'Expected building loss rate (Natural Hazards Risk Index) (percentile)',
 'Expected population loss rate (Natural Hazards Risk Index) (percentile)',
 'Energy burden (percentile)',
 'PM2.5 in the air (percentile)',
 'Diesel particulate matter exposure (percentile)',
 'Traffic proximity and volume (percentile)',
 'Housing burden (percent) (percentile)',
 'Percent pre-1960s housing (lead paint indicator) (percentile)',
 'Median value ($) of owner-occupied housing units (percentile)',
 'Proximity to hazardous waste sites (percentile)',
 'Proximity to NPL sites (percentile)',
 'Proximity to Risk Management Plan (RMP) facilities (percentile)',
 'Wastewater discharge (percentile)',
 'Current asthma among adults aged greater than or equal to 18 years (percentile)',
 'Diagnosed diabetes among adults aged greater than or equal to 18 years (percentile)',
 'Coronary heart disease among adults aged greater than or equal to 18 years (percentile)',
 'Low life expectancy (percentile)',
 'Low median household income as a percent of area median income (percentile)',
 'Linguistic isolation (percent) (percentile)',
 'Unemployment (percent) (percentile)',
 'Percent of individuals below 200% Federal Poverty Line (percentile)',
 'Percent of individuals < 100% Federal Poverty Line (percentile)',
 'Percent individuals age 25 or over with less than high school degree (percentile)',
 'Definition M (percentile)',
 'Low median household income as a percent of territory median income in 2009 (percentile)',
 'Percentage households below 100% of federal poverty line in 2009 for island areas (percentile)',
 'Unemployment (percent) in 2009 for island areas (percentile)']
In [154]:
for col in [
    col for col in download_codebook.index.to_list() if "(percentile)" in col
]:
    print(f"  - column_name: {col}")
    if "Low" not in col:
        print(
            "      notes: all percentiles are floored (rounded down to the nearest percentile). For example, 89.7th percentile is rounded down to 89 for this field."
        )
    else:
        print(
            "      notes: (1) this percentile is reversed, meaning the lowest raw numbers become the highest percentiles, and (2) all percentiles are floored (rounded down to the nearest percentile). For example, 89.7th percentile is rounded down to 89 for this field."
        )

stdout (condensed):
  - column_name: Expected agricultural loss rate (Natural Hazards Risk Index) (percentile)
    notes: all percentiles are floored (rounded down to the nearest percentile). For example, 89.7th percentile is rounded down to 89 for this field.
  [... the same floored-percentile note is printed for every percentile column listed in Out[153]; the three "Low ..." columns instead get: "(1) this percentile is reversed, meaning the lowest raw numbers become the highest percentiles, and (2) all percentiles are floored (rounded down to the nearest percentile). For example, 89.7th percentile is rounded down to 89 for this field." ...]
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": 143,
|
||||||
|
"id": "5c08708e-4ebf-4cfe-8efb-7ee6c7930427",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [
|
||||||
|
{
|
||||||
|
"data": {
|
||||||
|
"text/html": [
|
||||||
|
"<div>\n",
|
||||||
|
"<style scoped>\n",
|
||||||
|
" .dataframe tbody tr th:only-of-type {\n",
|
||||||
|
" vertical-align: middle;\n",
|
||||||
|
" }\n",
|
||||||
|
"\n",
|
||||||
|
" .dataframe tbody tr th {\n",
|
||||||
|
" vertical-align: top;\n",
|
||||||
|
" }\n",
|
||||||
|
"\n",
|
||||||
|
" .dataframe thead th {\n",
|
||||||
|
" text-align: right;\n",
|
||||||
|
" }\n",
|
||||||
|
"</style>\n",
|
||||||
|
"<table border=\"1\" class=\"dataframe\">\n",
|
||||||
|
" <thead>\n",
|
||||||
|
" <tr style=\"text-align: right;\">\n",
|
||||||
|
" <th></th>\n",
|
||||||
|
" <th>excel_label</th>\n",
|
||||||
|
" <th>format</th>\n",
|
||||||
|
" <th>shapefile_column</th>\n",
|
||||||
|
" <th>notes</th>\n",
|
||||||
|
" <th>category</th>\n",
|
||||||
|
" </tr>\n",
|
||||||
|
" <tr>\n",
|
||||||
|
" <th>score_name</th>\n",
|
||||||
|
" <th></th>\n",
|
||||||
|
" <th></th>\n",
|
||||||
|
" <th></th>\n",
|
||||||
|
" <th></th>\n",
|
||||||
|
" <th></th>\n",
|
||||||
|
" </tr>\n",
|
||||||
|
" </thead>\n",
|
||||||
|
" <tbody>\n",
|
||||||
|
" <tr>\n",
|
||||||
|
" <th>GEOID10_TRACT</th>\n",
|
||||||
|
" <td>Census tract ID</td>\n",
|
||||||
|
" <td>string</td>\n",
|
||||||
|
" <td>NaN</td>\n",
|
||||||
|
" <td>NaN</td>\n",
|
||||||
|
" <td>NaN</td>\n",
|
||||||
|
" </tr>\n",
|
||||||
|
" <tr>\n",
|
||||||
|
" <th>County Name</th>\n",
|
||||||
|
" <td>County Name</td>\n",
|
||||||
|
" <td>string</td>\n",
|
||||||
|
" <td>CF</td>\n",
|
||||||
|
" <td>NaN</td>\n",
|
||||||
|
" <td>NaN</td>\n",
|
||||||
|
" </tr>\n",
|
||||||
|
" <tr>\n",
|
||||||
|
" <th>State/Territory</th>\n",
|
||||||
|
" <td>State/Territory</td>\n",
|
||||||
|
" <td>string</td>\n",
|
||||||
|
" <td>SF</td>\n",
|
||||||
|
" <td>NaN</td>\n",
|
||||||
|
" <td>NaN</td>\n",
|
||||||
|
" </tr>\n",
|
||||||
|
" <tr>\n",
|
||||||
|
" <th>Total threshold criteria exceeded</th>\n",
|
||||||
|
" <td>Total threshold criteria exceeded</td>\n",
|
||||||
|
" <td>int64</td>\n",
|
||||||
|
" <td>TC</td>\n",
|
||||||
|
" <td>Lists out the total number of criteria (where ...</td>\n",
|
||||||
|
" <td>NaN</td>\n",
|
||||||
|
" </tr>\n",
|
||||||
|
" <tr>\n",
|
||||||
|
" <th>Definition M (communities)</th>\n",
|
||||||
|
" <td>Identified as disadvantaged</td>\n",
|
||||||
|
" <td>bool</td>\n",
|
||||||
|
" <td>SM_C</td>\n",
|
||||||
|
" <td>True / False variable for whether a tract is a...</td>\n",
|
||||||
|
" <td>NaN</td>\n",
|
||||||
|
" </tr>\n",
|
||||||
|
" <tr>\n",
|
||||||
|
" <th>...</th>\n",
|
||||||
|
" <td>...</td>\n",
|
||||||
|
" <td>...</td>\n",
|
||||||
|
" <td>...</td>\n",
|
||||||
|
" <td>...</td>\n",
|
||||||
|
" <td>...</td>\n",
|
||||||
|
" </tr>\n",
|
||||||
|
" <tr>\n",
|
||||||
|
" <th>Unemployment (percent) in 2009 (island areas) and 2010 (states and PR)</th>\n",
|
||||||
|
" <td>Unemployment (percent) in 2009 (island areas) ...</td>\n",
|
||||||
|
" <td>percentage</td>\n",
|
||||||
|
" <td>NaN</td>\n",
|
||||||
|
" <td>NaN</td>\n",
|
||||||
|
" <td>NaN</td>\n",
|
||||||
|
" </tr>\n",
|
||||||
|
" <tr>\n",
|
||||||
|
" <th>Percentage households below 100% of federal poverty line in 2009 (island areas) and 2010 (states and PR)</th>\n",
|
||||||
|
" <td>Percentage households below 100% of federal po...</td>\n",
|
||||||
|
" <td>percentage</td>\n",
|
||||||
|
" <td>NaN</td>\n",
|
||||||
|
" <td>NaN</td>\n",
|
||||||
|
" <td>NaN</td>\n",
|
||||||
|
" </tr>\n",
|
||||||
|
" <tr>\n",
|
||||||
|
" <th>Greater than or equal to the 90th percentile for unemployment and has low HS education in 2009 (island areas)?</th>\n",
|
||||||
|
" <td>Greater than or equal to the 90th percentile f...</td>\n",
|
||||||
|
" <td>bool</td>\n",
|
||||||
|
" <td>IAULHSE</td>\n",
|
||||||
|
" <td>island area information comes from the dicenni...</td>\n",
|
||||||
|
" <td>training and workforce development</td>\n",
|
||||||
|
" </tr>\n",
|
||||||
|
" <tr>\n",
|
||||||
|
" <th>Greater than or equal to the 90th percentile for households at or below 100% federal poverty level and has low HS education in 2009 (island areas)?</th>\n",
|
||||||
|
" <td>Greater than or equal to the 90th percentile f...</td>\n",
|
||||||
|
" <td>bool</td>\n",
|
||||||
|
" <td>IAPLHSE</td>\n",
|
||||||
|
" <td>island area information comes from the dicenni...</td>\n",
|
||||||
|
" <td>training and workforce development</td>\n",
|
||||||
|
" </tr>\n",
|
||||||
|
" <tr>\n",
|
||||||
|
" <th>Greater than or equal to the 90th percentile for low median household income as a percent of area median income and has low HS education in 2009 (island areas)?</th>\n",
|
||||||
|
" <td>Greater than or equal to the 90th percentile f...</td>\n",
|
||||||
|
" <td>bool</td>\n",
|
||||||
|
" <td>IALMILHSE</td>\n",
|
||||||
|
" <td>island area information comes from the dicenni...</td>\n",
|
||||||
|
" <td>training and workforce development</td>\n",
|
||||||
|
" </tr>\n",
|
||||||
|
" </tbody>\n",
|
||||||
|
"</table>\n",
|
||||||
|
"<p>82 rows × 5 columns</p>\n",
|
||||||
|
"</div>"
|
||||||
|
],
|
||||||
|
"text/plain": [
|
||||||
|
" excel_label \\\n",
|
||||||
|
"score_name \n",
|
||||||
|
"GEOID10_TRACT Census tract ID \n",
|
||||||
|
"County Name County Name \n",
|
||||||
|
"State/Territory State/Territory \n",
|
||||||
|
"Total threshold criteria exceeded Total threshold criteria exceeded \n",
|
||||||
|
"Definition M (communities) Identified as disadvantaged \n",
|
||||||
|
"... ... \n",
|
||||||
|
"Unemployment (percent) in 2009 (island areas) a... Unemployment (percent) in 2009 (island areas) ... \n",
|
||||||
|
"Percentage households below 100% of federal pov... Percentage households below 100% of federal po... \n",
|
||||||
|
"Greater than or equal to the 90th percentile fo... Greater than or equal to the 90th percentile f... \n",
|
||||||
|
"Greater than or equal to the 90th percentile fo... Greater than or equal to the 90th percentile f... \n",
|
||||||
|
"Greater than or equal to the 90th percentile fo... Greater than or equal to the 90th percentile f... \n",
|
||||||
|
"\n",
|
||||||
|
" format \\\n",
|
||||||
|
"score_name \n",
|
||||||
|
"GEOID10_TRACT string \n",
|
||||||
|
"County Name string \n",
|
||||||
|
"State/Territory string \n",
|
||||||
|
"Total threshold criteria exceeded int64 \n",
|
||||||
|
"Definition M (communities) bool \n",
|
||||||
|
"... ... \n",
|
||||||
|
"Unemployment (percent) in 2009 (island areas) a... percentage \n",
|
||||||
|
"Percentage households below 100% of federal pov... percentage \n",
|
||||||
|
"Greater than or equal to the 90th percentile fo... bool \n",
|
||||||
|
"Greater than or equal to the 90th percentile fo... bool \n",
|
||||||
|
"Greater than or equal to the 90th percentile fo... bool \n",
|
||||||
|
"\n",
|
||||||
|
" shapefile_column \\\n",
|
||||||
|
"score_name \n",
|
||||||
|
"GEOID10_TRACT NaN \n",
|
||||||
|
"County Name CF \n",
|
||||||
|
"State/Territory SF \n",
|
||||||
|
"Total threshold criteria exceeded TC \n",
|
||||||
|
"Definition M (communities) SM_C \n",
|
||||||
|
"... ... \n",
|
||||||
|
"Unemployment (percent) in 2009 (island areas) a... NaN \n",
|
||||||
|
"Percentage households below 100% of federal pov... NaN \n",
|
||||||
|
"Greater than or equal to the 90th percentile fo... IAULHSE \n",
|
||||||
|
"Greater than or equal to the 90th percentile fo... IAPLHSE \n",
|
||||||
|
"Greater than or equal to the 90th percentile fo... IALMILHSE \n",
|
||||||
|
"\n",
|
||||||
|
" notes \\\n",
|
||||||
|
"score_name \n",
|
||||||
|
"GEOID10_TRACT NaN \n",
|
||||||
|
"County Name NaN \n",
|
||||||
|
"State/Territory NaN \n",
|
||||||
|
"Total threshold criteria exceeded Lists out the total number of criteria (where ... \n",
|
||||||
|
"Definition M (communities) True / False variable for whether a tract is a... \n",
|
||||||
|
"... ... \n",
|
||||||
|
"Unemployment (percent) in 2009 (island areas) a... NaN \n",
|
||||||
|
"Percentage households below 100% of federal pov... NaN \n",
|
||||||
|
"Greater than or equal to the 90th percentile fo... island area information comes from the dicenni... \n",
|
||||||
|
"Greater than or equal to the 90th percentile fo... island area information comes from the dicenni... \n",
|
||||||
|
"Greater than or equal to the 90th percentile fo... island area information comes from the dicenni... \n",
|
||||||
|
"\n",
|
||||||
|
" category \n",
|
||||||
|
"score_name \n",
|
||||||
|
"GEOID10_TRACT NaN \n",
|
||||||
|
"County Name NaN \n",
|
||||||
|
"State/Territory NaN \n",
|
||||||
|
"Total threshold criteria exceeded NaN \n",
|
||||||
|
"Definition M (communities) NaN \n",
|
||||||
|
"... ... \n",
|
||||||
|
"Unemployment (percent) in 2009 (island areas) a... NaN \n",
|
||||||
|
"Percentage households below 100% of federal pov... NaN \n",
|
||||||
|
"Greater than or equal to the 90th percentile fo... training and workforce development \n",
|
||||||
|
"Greater than or equal to the 90th percentile fo... training and workforce development \n",
|
||||||
|
"Greater than or equal to the 90th percentile fo... training and workforce development \n",
|
||||||
|
"\n",
|
||||||
|
"[82 rows x 5 columns]"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"execution_count": 143,
|
||||||
|
"metadata": {},
|
||||||
|
"output_type": "execute_result"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"source": [
|
||||||
|
"download_codebook.dropna(subset=[\"format\"]).reset_index()[\"score_name\"]"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": 137,
|
||||||
|
"id": "7139ce5d-db5e-49dd-8bb3-122c7b73b395",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [
|
||||||
|
{
|
||||||
|
"data": {
|
||||||
|
"text/html": [
|
||||||
|
"<div>\n",
|
||||||
|
"<style scoped>\n",
|
||||||
|
" .dataframe tbody tr th:only-of-type {\n",
|
||||||
|
" vertical-align: middle;\n",
|
||||||
|
" }\n",
|
||||||
|
"\n",
|
||||||
|
" .dataframe tbody tr th {\n",
|
||||||
|
" vertical-align: top;\n",
|
||||||
|
" }\n",
|
||||||
|
"\n",
|
||||||
|
" .dataframe thead th {\n",
|
||||||
|
" text-align: right;\n",
|
||||||
|
" }\n",
|
||||||
|
"</style>\n",
|
||||||
|
"<table border=\"1\" class=\"dataframe\">\n",
|
||||||
|
" <thead>\n",
|
||||||
|
" <tr style=\"text-align: right;\">\n",
|
||||||
|
" <th></th>\n",
|
||||||
|
" <th>excel_label</th>\n",
|
||||||
|
" <th>format</th>\n",
|
||||||
|
" <th>shapefile_column</th>\n",
|
||||||
|
" <th>notes</th>\n",
|
||||||
|
" <th>category</th>\n",
|
||||||
|
" </tr>\n",
|
||||||
|
" <tr>\n",
|
||||||
|
" <th>score_name</th>\n",
|
||||||
|
" <th></th>\n",
|
||||||
|
" <th></th>\n",
|
||||||
|
" <th></th>\n",
|
||||||
|
" <th></th>\n",
|
||||||
|
" <th></th>\n",
|
||||||
|
" </tr>\n",
|
||||||
|
" </thead>\n",
|
||||||
|
" <tbody>\n",
|
||||||
|
" </tbody>\n",
|
||||||
|
"</table>\n",
|
||||||
|
"</div>"
|
||||||
|
],
|
||||||
|
"text/plain": [
|
||||||
|
"Empty DataFrame\n",
|
||||||
|
"Columns: [excel_label, format, shapefile_column, notes, category]\n",
|
||||||
|
"Index: []"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"execution_count": 137,
|
||||||
|
"metadata": {},
|
||||||
|
"output_type": "execute_result"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"source": [
|
||||||
|
"download_codebook.loc[\n",
|
||||||
|
" sum([download_codebook[col] == \"percentile\" for col in [\"format\"]]) > 0\n",
|
||||||
|
"]"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": 134,
|
||||||
|
"id": "e31ef01c-b225-48f0-bdf5-1efb8d4ed95c",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [
|
||||||
|
{
|
||||||
|
"ename": "ValueError",
|
||||||
|
"evalue": "Cannot index with multidimensional key",
|
||||||
|
"output_type": "error",
|
||||||
|
"traceback": [
|
||||||
|
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
|
||||||
|
"\u001b[0;31mValueError\u001b[0m Traceback (most recent call last)",
|
||||||
|
"Input \u001b[0;32mIn [134]\u001b[0m, in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[43mdownload_codebook\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mloc\u001b[49m\u001b[43m[\u001b[49m\u001b[43mdownload_codebook\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfilter\u001b[49m\u001b[43m(\u001b[49m\u001b[43mlike\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mformat\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m==\u001b[39;49m\u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mpercentile\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m]\u001b[49m\n",
|
||||||
|
"File \u001b[0;32m/usr/local/lib/python3.9/site-packages/pandas/core/indexing.py:931\u001b[0m, in \u001b[0;36m_LocationIndexer.__getitem__\u001b[0;34m(self, key)\u001b[0m\n\u001b[1;32m 928\u001b[0m axis \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39maxis \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;241m0\u001b[39m\n\u001b[1;32m 930\u001b[0m maybe_callable \u001b[38;5;241m=\u001b[39m com\u001b[38;5;241m.\u001b[39mapply_if_callable(key, \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mobj)\n\u001b[0;32m--> 931\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_getitem_axis\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmaybe_callable\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43maxis\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43maxis\u001b[49m\u001b[43m)\u001b[49m\n",
|
||||||
|
"File \u001b[0;32m/usr/local/lib/python3.9/site-packages/pandas/core/indexing.py:1151\u001b[0m, in \u001b[0;36m_LocIndexer._getitem_axis\u001b[0;34m(self, key, axis)\u001b[0m\n\u001b[1;32m 1148\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m (\u001b[38;5;28misinstance\u001b[39m(key, \u001b[38;5;28mtuple\u001b[39m) \u001b[38;5;129;01mand\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(labels, MultiIndex)):\n\u001b[1;32m 1150\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mhasattr\u001b[39m(key, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mndim\u001b[39m\u001b[38;5;124m\"\u001b[39m) \u001b[38;5;129;01mand\u001b[39;00m key\u001b[38;5;241m.\u001b[39mndim \u001b[38;5;241m>\u001b[39m \u001b[38;5;241m1\u001b[39m:\n\u001b[0;32m-> 1151\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mCannot index with multidimensional key\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[1;32m 1153\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_getitem_iterable(key, axis\u001b[38;5;241m=\u001b[39maxis)\n\u001b[1;32m 1155\u001b[0m \u001b[38;5;66;03m# nested tuple slicing\u001b[39;00m\n",
|
||||||
|
"\u001b[0;31mValueError\u001b[0m: Cannot index with multidimensional key"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"source": [
|
||||||
|
"download_codebook.loc[download_codebook.filter(like=\"format\") == \"percentile\"]"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": 131,
|
||||||
|
"id": "73268de4-3378-4ac7-bf85-f483a78c3966",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"download_codebook = pd.concat(\n",
|
||||||
|
" [\n",
|
||||||
|
" codebook.set_index(\"score_name\"),\n",
|
||||||
|
" shapefile_codes.rename(\n",
|
||||||
|
" columns={\"meaning\": \"shapefile_column\", \"column\": \"score_name\"}\n",
|
||||||
|
" ).set_index(\"score_name\"),\n",
|
||||||
|
" details_to_merge.set_index(\"score_name\"),\n",
|
||||||
|
" ],\n",
|
||||||
|
" axis=1,\n",
|
||||||
|
")"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"id": "6321ed42-aee6-40fc-8bf8-2a4ce4276eca",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": []
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"kernelspec": {
|
||||||
|
"display_name": "Python 3 (ipykernel)",
|
||||||
|
"language": "python",
|
||||||
|
"name": "python3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"codemirror_mode": {
|
||||||
|
"name": "ipython",
|
||||||
|
"version": 3
|
||||||
|
},
|
||||||
|
"file_extension": ".py",
|
||||||
|
"mimetype": "text/x-python",
|
||||||
|
"name": "python",
|
||||||
|
"nbconvert_exporter": "python",
|
||||||
|
"pygments_lexer": "ipython3",
|
||||||
|
"version": "3.9.10"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 5
|
||||||
|
}
|
|
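A minimal sketch (not part of the notebook) of why the cell with execution_count 134 raises "Cannot index with multidimensional key": df.filter(like="format") returns a DataFrame, so comparing it to a string produces a 2-D boolean frame, which .loc cannot use as a row indexer. A 1-D boolean Series over a single column works; the small DataFrame below is illustrative only.

import pandas as pd

codebook = pd.DataFrame(
    {"format": ["percentile", "string", "percentile"]},
    index=["col_a", "col_b", "col_c"],
)

# Broken: 2-D key -> ValueError, as in the notebook cell above
# codebook.loc[codebook.filter(like="format") == "percentile"]

# Working: a boolean Series over one column selects rows
percentile_rows = codebook.loc[codebook["format"] == "percentile"]
print(percentile_rows)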
@@ -1,6 +1,8 @@
-from dataclasses import dataclass, field
+from dataclasses import dataclass
+from dataclasses import field
 from enum import Enum
-from typing import List, Optional
+from typing import List
+from typing import Optional
 
 
 class FieldType(Enum):
@@ -5,15 +5,15 @@ import typing
 from typing import Optional
 
 import pandas as pd
 
 from data_pipeline.config import settings
-from data_pipeline.etl.score.schemas.datasets import DatasetsConfig
-from data_pipeline.utils import (
-    load_yaml_dict_from_file,
-    unzip_file_from_url,
-    remove_all_from_dir,
-    get_module_logger,
-)
+from data_pipeline.etl.score.etl_utils import (
+    compare_to_list_of_expected_state_fips_codes,
+)
+from data_pipeline.etl.score.schemas.datasets import DatasetsConfig
+from data_pipeline.utils import get_module_logger
+from data_pipeline.utils import load_yaml_dict_from_file
+from data_pipeline.utils import remove_all_from_dir
+from data_pipeline.utils import unzip_file_from_url
 
 logger = get_module_logger(__name__)
 
@@ -43,10 +43,11 @@ class ExtractTransformLoad:
     APP_ROOT: pathlib.Path = settings.APP_ROOT
 
     # Directories
-    DATA_PATH: pathlib.Path = APP_ROOT / "data"
+    DATA_PATH: pathlib.Path = settings.DATA_PATH
     TMP_PATH: pathlib.Path = DATA_PATH / "tmp"
     CONTENT_CONFIG: pathlib.Path = APP_ROOT / "content" / "config"
-    DATASET_CONFIG: pathlib.Path = APP_ROOT / "etl" / "score" / "config"
+    DATASET_CONFIG_PATH: pathlib.Path = APP_ROOT / "etl" / "score" / "config"
+    DATASET_CONFIG: Optional[dict] = None
 
     # Parameters
     GEOID_FIELD_NAME: str = "GEOID10"
@@ -81,6 +82,23 @@ class ExtractTransformLoad:
     # NULL_REPRESENTATION is how nulls are represented on the input field
     NULL_REPRESENTATION: str = None
 
+    # Whether this ETL contains data for the continental nation (DC & the US states
+    # except for Alaska and Hawaii)
+    CONTINENTAL_US_EXPECTED_IN_DATA: bool = True
+
+    # Whether this ETL contains data for Alaska and Hawaii
+    ALASKA_AND_HAWAII_EXPECTED_IN_DATA: bool = True
+
+    # Whether this ETL contains data for Puerto Rico
+    PUERTO_RICO_EXPECTED_IN_DATA: bool = True
+
+    # Whether this ETL contains data for the island areas
+    ISLAND_AREAS_EXPECTED_IN_DATA: bool = False
+
+    # Whether this ETL contains known missing data for any additional
+    # states/territories
+    EXPECTED_MISSING_STATES: typing.List[str] = []
+
     # Thirteen digits in a census block group ID.
     EXPECTED_CENSUS_BLOCK_GROUPS_CHARACTER_LENGTH: int = 13
     # TODO: investigate. Census says there are only 217,740 CBGs in the US. This might
@@ -94,17 +112,24 @@ class ExtractTransformLoad:
     # periods. https://github.com/usds/justice40-tool/issues/964
     EXPECTED_MAX_CENSUS_TRACTS: int = 74160
 
+    # Should this dataset load its configuration from
+    # the YAML files?
+    LOAD_YAML_CONFIG: bool = False
+
     # We use output_df as the final dataframe to use to write to the CSV
     # It is used on the "load" base class method
     output_df: pd.DataFrame = None
 
+    def __init_subclass__(cls) -> None:
+        if cls.LOAD_YAML_CONFIG:
+            cls.DATASET_CONFIG = cls.yaml_config_load()
+
     @classmethod
     def yaml_config_load(cls) -> dict:
         """Generate config dictionary and set instance variables from YAML dataset."""
 
         # check if the class instance has score YAML definitions
         datasets_config = load_yaml_dict_from_file(
-            cls.DATASET_CONFIG / "datasets.yml",
+            cls.DATASET_CONFIG_PATH / "datasets.yml",
             DatasetsConfig,
         )
 
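A sketch of the opt-in this hunk introduces, assuming only what the diff shows: any subclass that sets LOAD_YAML_CONFIG = True has its DATASET_CONFIG populated from datasets.yml at class-definition time, because __init_subclass__ runs when the subclass body is evaluated. The class and dataset names here are illustrative, not from the repo.

class ExampleYamlETL(ExtractTransformLoad):
    NAME = "example_dataset"  # hypothetical dataset name
    LOAD_YAML_CONFIG = True  # triggers cls.yaml_config_load() via __init_subclass__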
@@ -123,9 +148,10 @@ class ExtractTransformLoad:
             sys.exit()
 
         # set some of the basic fields
-        cls.INPUT_GEOID_TRACT_FIELD_NAME = dataset_config[
-            "input_geoid_tract_field_name"
-        ]
+        if "input_geoid_tract_field_name" in dataset_config:
+            cls.INPUT_GEOID_TRACT_FIELD_NAME = dataset_config[
+                "input_geoid_tract_field_name"
+            ]
 
         # get the columns to write on the CSV
         # and set the constants
@@ -134,11 +160,7 @@ class ExtractTransformLoad:
         ]
         for field in dataset_config["load_fields"]:
             cls.COLUMNS_TO_KEEP.append(field["long_name"])
-
-            # set the constants for the class
             setattr(cls, field["df_field_name"], field["long_name"])
-
-        # return the config dict
         return dataset_config
 
     # This is a classmethod so it can be used by `get_data_frame` without
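A worked sketch of the loop above, assuming a single load_fields entry (the values are taken from the DOT entry in the datasets.yml hunk further down): each YAML field becomes both a retained output column and a class-level constant named by df_field_name.

dataset_config = {
    "load_fields": [
        {
            "short_name": "travel_burden",
            "df_field_name": "TRAVEL_BURDEN_FIELD_NAME",
            "long_name": "DOT Travel Barriers Score",
        },
    ]
}

class Cls:  # stand-in for the ETL class being configured
    COLUMNS_TO_KEEP = []

for field in dataset_config["load_fields"]:
    Cls.COLUMNS_TO_KEEP.append(field["long_name"])
    setattr(Cls, field["df_field_name"], field["long_name"])

assert Cls.TRAVEL_BURDEN_FIELD_NAME == "DOT Travel Barriers Score"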
@@ -176,14 +198,18 @@ class ExtractTransformLoad:
         to get the file from a source url, unzips it and stores it on an
         extract_path."""
 
-        # this can be accessed via super().extract()
-        if source_url and extract_path:
-            unzip_file_from_url(
-                file_url=source_url,
-                download_path=self.get_tmp_path(),
-                unzipped_file_path=extract_path,
-                verify=verify,
-            )
+        if source_url is None:
+            source_url = self.SOURCE_URL
+
+        if extract_path is None:
+            extract_path = self.get_tmp_path()
+
+        unzip_file_from_url(
+            file_url=source_url,
+            download_path=self.get_tmp_path(),
+            unzipped_file_path=extract_path,
+            verify=verify,
+        )
 
     def transform(self) -> None:
         """Transform the data extracted into a format that can be consumed by the
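A sketch of what the new defaults buy a subclass, assuming only what this hunk shows: calling super().extract() with no arguments now falls back to the class's SOURCE_URL and tmp path, instead of silently doing nothing when both arguments were omitted. The subclass name and URL are illustrative.

class ExampleETL(ExtractTransformLoad):
    NAME = "example_dataset"
    SOURCE_URL = "https://example.com/data.zip"  # hypothetical source

    def extract(self) -> None:
        # uses SOURCE_URL and self.get_tmp_path() by default
        super().extract()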
@@ -280,6 +306,24 @@ class ExtractTransformLoad:
                 f"`{geo_field}`."
             )
 
+        # Check whether data contains expected states
+        states_in_output_df = (
+            self.output_df[self.GEOID_TRACT_FIELD_NAME]
+            .str[0:2]
+            .unique()
+            .tolist()
+        )
+
+        compare_to_list_of_expected_state_fips_codes(
+            actual_state_fips_codes=states_in_output_df,
+            continental_us_expected=self.CONTINENTAL_US_EXPECTED_IN_DATA,
+            alaska_and_hawaii_expected=self.ALASKA_AND_HAWAII_EXPECTED_IN_DATA,
+            puerto_rico_expected=self.PUERTO_RICO_EXPECTED_IN_DATA,
+            island_areas_expected=self.ISLAND_AREAS_EXPECTED_IN_DATA,
+            additional_fips_codes_not_expected=self.EXPECTED_MISSING_STATES,
+            dataset_name=self.NAME,
+        )
+
     def load(self, float_format=None) -> None:
         """Saves the transformed data.
 
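A worked example (synthetic GEOIDs) of the state-FIPS derivation above: the first two characters of an 11-digit tract GEOID are the state code, so slicing and de-duplicating yields the set of states present in the output.

import pandas as pd

tracts = pd.Series(["01073001100", "72001956300", "01073001400"])
state_fips = tracts.str[0:2].unique().tolist()
print(state_fips)  # ['01', '72'] -> Alabama and Puerto Rico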
@@ -318,6 +362,9 @@ class ExtractTransformLoad:
                 f"No file found at `{output_file_path}`."
             )
 
+        logger.info(
+            f"Reading in CSV `{output_file_path}` for ETL of class `{cls}`."
+        )
         output_df = pd.read_csv(
             output_file_path,
             dtype={
@@ -3,121 +3,188 @@ DATASET_LIST = [
         "name": "cdc_places",
         "module_dir": "cdc_places",
         "class_name": "CDCPlacesETL",
+        "is_memory_intensive": False,
     },
     {
         "name": "national_risk_index",
         "module_dir": "national_risk_index",
         "class_name": "NationalRiskIndexETL",
+        "is_memory_intensive": False,
+    },
+    {
+        "name": "travel_composite",
+        "module_dir": "dot_travel_composite",
+        "class_name": "TravelCompositeETL",
+        "is_memory_intensive": False,
     },
     {
         "name": "tree_equity_score",
        "module_dir": "tree_equity_score",
         "class_name": "TreeEquityScoreETL",
-    },
-    {
-        "name": "census_acs",
-        "module_dir": "census_acs",
-        "class_name": "CensusACSETL",
-    },
-    {
-        "name": "census_acs_2010",
-        "module_dir": "census_acs_2010",
-        "class_name": "CensusACS2010ETL",
+        "is_memory_intensive": False,
     },
     {
         "name": "census_decennial",
         "module_dir": "census_decennial",
         "class_name": "CensusDecennialETL",
+        "is_memory_intensive": False,
     },
     {
         "name": "mapping_for_ej",
         "module_dir": "mapping_for_ej",
         "class_name": "MappingForEJETL",
+        "is_memory_intensive": False,
+    },
+    {
+        "name": "fsf_flood_risk",
+        "module_dir": "fsf_flood_risk",
+        "class_name": "FloodRiskETL",
+        "is_memory_intensive": False,
+    },
+    {
+        "name": "fsf_wildfire_risk",
+        "module_dir": "fsf_wildfire_risk",
+        "class_name": "WildfireRiskETL",
+        "is_memory_intensive": False,
     },
     {
         "name": "ejscreen",
         "module_dir": "ejscreen",
         "class_name": "EJSCREENETL",
+        "is_memory_intensive": False,
     },
     {
         "name": "hud_housing",
         "module_dir": "hud_housing",
         "class_name": "HudHousingETL",
+        "is_memory_intensive": False,
+    },
+    {
+        "name": "nlcd_nature_deprived",
+        "module_dir": "nlcd_nature_deprived",
+        "class_name": "NatureDeprivedETL",
+        "is_memory_intensive": False,
     },
     {
         "name": "census_acs_median_income",
         "module_dir": "census_acs_median_income",
         "class_name": "CensusACSMedianIncomeETL",
+        "is_memory_intensive": False,
     },
     {
         "name": "cdc_life_expectancy",
         "module_dir": "cdc_life_expectancy",
         "class_name": "CDCLifeExpectancy",
+        "is_memory_intensive": False,
     },
     {
         "name": "doe_energy_burden",
         "module_dir": "doe_energy_burden",
         "class_name": "DOEEnergyBurden",
+        "is_memory_intensive": False,
     },
     {
         "name": "geocorr",
         "module_dir": "geocorr",
         "class_name": "GeoCorrETL",
-    },
-    {
-        "name": "child_opportunity_index",
-        "module_dir": "child_opportunity_index",
-        "class_name": "ChildOpportunityIndex",
+        "is_memory_intensive": False,
     },
     {
         "name": "mapping_inequality",
         "module_dir": "mapping_inequality",
         "class_name": "MappingInequalityETL",
+        "is_memory_intensive": False,
     },
     {
         "name": "persistent_poverty",
         "module_dir": "persistent_poverty",
         "class_name": "PersistentPovertyETL",
+        "is_memory_intensive": False,
     },
     {
         "name": "ejscreen_areas_of_concern",
         "module_dir": "ejscreen_areas_of_concern",
         "class_name": "EJSCREENAreasOfConcernETL",
+        "is_memory_intensive": False,
     },
     {
         "name": "calenviroscreen",
         "module_dir": "calenviroscreen",
         "class_name": "CalEnviroScreenETL",
+        "is_memory_intensive": False,
     },
     {
         "name": "hud_recap",
         "module_dir": "hud_recap",
         "class_name": "HudRecapETL",
+        "is_memory_intensive": False,
     },
     {
         "name": "epa_rsei",
         "module_dir": "epa_rsei",
         "class_name": "EPARiskScreeningEnvironmentalIndicatorsETL",
+        "is_memory_intensive": False,
     },
     {
         "name": "energy_definition_alternative_draft",
         "module_dir": "energy_definition_alternative_draft",
         "class_name": "EnergyDefinitionAlternativeDraft",
+        "is_memory_intensive": False,
     },
     {
         "name": "michigan_ejscreen",
         "module_dir": "michigan_ejscreen",
         "class_name": "MichiganEnviroScreenETL",
+        "is_memory_intensive": False,
     },
     {
         "name": "cdc_svi_index",
         "module_dir": "cdc_svi_index",
         "class_name": "CDCSVIIndex",
+        "is_memory_intensive": False,
     },
     {
         "name": "maryland_ejscreen",
         "module_dir": "maryland_ejscreen",
         "class_name": "MarylandEJScreenETL",
+        "is_memory_intensive": False,
+    },
+    {
+        "name": "historic_redlining",
+        "module_dir": "historic_redlining",
+        "class_name": "HistoricRedliningETL",
+        "is_memory_intensive": False,
+    },
+    # This has to come after us.json exists
+    {
+        "name": "census_acs",
+        "module_dir": "census_acs",
+        "class_name": "CensusACSETL",
+        "is_memory_intensive": False,
+    },
+    {
+        "name": "census_acs_2010",
+        "module_dir": "census_acs_2010",
+        "class_name": "CensusACS2010ETL",
+        "is_memory_intensive": False,
+    },
+    {
+        "name": "us_army_fuds",
+        "module_dir": "us_army_fuds",
+        "class_name": "USArmyFUDS",
+        "is_memory_intensive": True,
+    },
+    {
+        "name": "eamlis",
+        "module_dir": "eamlis",
+        "class_name": "AbandonedMineETL",
+        "is_memory_intensive": True,
+    },
+    {
+        "name": "tribal_overlap",
+        "module_dir": "tribal_overlap",
+        "class_name": "TribalOverlapETL",
+        "is_memory_intensive": True,
     },
 ]
@@ -125,10 +192,12 @@ CENSUS_INFO = {
     "name": "census",
     "module_dir": "census",
     "class_name": "CensusETL",
+    "is_memory_intensive": False,
 }
 
 TRIBAL_INFO = {
     "name": "tribal",
     "module_dir": "tribal",
     "class_name": "TribalETL",
+    "is_memory_intensive": False,
 }
@@ -1,5 +1,5 @@
-import importlib
 import concurrent.futures
+import importlib
 import typing
 
 from data_pipeline.etl.score.etl_score import ScoreETL
@@ -77,16 +77,41 @@ def etl_runner(dataset_to_run: str = None) -> None:
     """
     dataset_list = _get_datasets_to_run(dataset_to_run)
 
-    with concurrent.futures.ThreadPoolExecutor() as executor:
-        futures = {
-            executor.submit(_run_one_dataset, dataset=dataset)
-            for dataset in dataset_list
-        }
+    # Because we are memory constrained on our infrastructure,
+    # we split datasets into those that are not memory intensive
+    # (is_memory_intensive == False) and thereby can be safely
+    # run in parallel, and those that require more RAM and thus
+    # should be run sequentially. The is_memory_intensive_flag is
+    # set manually in constants.py based on experience running
+    # the pipeline
+    concurrent_datasets = [
+        dataset
+        for dataset in dataset_list
+        if not dataset["is_memory_intensive"]
+    ]
+    high_memory_datasets = [
+        dataset for dataset in dataset_list if dataset["is_memory_intensive"]
+    ]
 
-        for fut in concurrent.futures.as_completed(futures):
-            # Calling result will raise an exception if one occurred.
-            # Otherwise, the exceptions are silently ignored.
-            fut.result()
+    if concurrent_datasets:
+        logger.info("Running concurrent jobs")
+        with concurrent.futures.ThreadPoolExecutor() as executor:
+            futures = {
+                executor.submit(_run_one_dataset, dataset=dataset)
+                for dataset in concurrent_datasets
+            }
+
+            for fut in concurrent.futures.as_completed(futures):
+                # Calling result will raise an exception if one occurred.
+                # Otherwise, the exceptions are silently ignored.
+                fut.result()
+
+    # Note: these high-memory datasets also usually require the Census geojson to be
+    # generated, and one of them requires the Tribal geojson to be generated.
+    if high_memory_datasets:
+        logger.info("Running high-memory jobs")
+        for dataset in high_memory_datasets:
+            _run_one_dataset(dataset=dataset)
 
 
 def score_generate() -> None:
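A standalone sketch of the scheduling pattern this hunk introduces: partition jobs by a memory flag, fan the cheap ones out to a thread pool, then run the expensive ones one at a time. The dataset dicts and run_one function are illustrative stand-ins for the pipeline's own.

import concurrent.futures

def run_one(dataset: dict) -> None:
    print(f"running {dataset['name']}")

dataset_list = [
    {"name": "ejscreen", "is_memory_intensive": False},
    {"name": "tribal_overlap", "is_memory_intensive": True},
]

cheap = [d for d in dataset_list if not d["is_memory_intensive"]]
expensive = [d for d in dataset_list if d["is_memory_intensive"]]

with concurrent.futures.ThreadPoolExecutor() as executor:
    futures = {executor.submit(run_one, d) for d in cheap}
    for fut in concurrent.futures.as_completed(futures):
        fut.result()  # surface any exception raised in a worker

for d in expensive:
    run_one(d)  # sequential, to cap peak memory use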
@@ -35,7 +35,6 @@ datasets:
         include_in_tiles: true
         include_in_downloadable_files: true
         create_percentile: true
-
       - short_name: "ex_ag_loss"
         df_field_name: "EXPECTED_AGRICULTURE_LOSS_RATE_FIELD_NAME"
         long_name: "Expected agricultural loss rate (Natural Hazards Risk Index)"
@@ -54,7 +53,6 @@ datasets:
         include_in_tiles: true
         include_in_downloadable_files: true
         create_percentile: true
-
       - short_name: "ex_bldg_loss"
         df_field_name: "EXPECTED_BUILDING_LOSS_RATE_FIELD_NAME"
         long_name: "Expected building loss rate (Natural Hazards Risk Index)"
@@ -72,8 +70,262 @@ datasets:
         include_in_tiles: true
         include_in_downloadable_files: true
         create_percentile: true
 
       - short_name: "has_ag_val"
         df_field_name: "CONTAINS_AGRIVALUE"
         long_name: "Contains agricultural value"
         field_type: bool
+  - long_name: "Child Opportunity Index 2.0 database"
+    short_name: "coi"
+    module_name: "child_opportunity_index"
+    input_geoid_tract_field_name: "geoid"
+    load_fields:
+      - short_name: "he_heat"
+        df_field_name: "EXTREME_HEAT_FIELD"
+        long_name: "Summer days above 90F"
+        field_type: float
+        include_in_downloadable_files: true
+        include_in_tiles: true
+      - short_name: "he_food"
+        long_name: "Percent low access to healthy food"
+        df_field_name: "HEALTHY_FOOD_FIELD"
+        field_type: float
+        include_in_downloadable_files: true
+        include_in_tiles: true
+      - short_name: "he_green"
+        long_name: "Percent impenetrable surface areas"
+        df_field_name: "IMPENETRABLE_SURFACES_FIELD"
+        field_type: float
+        include_in_downloadable_files: true
+        include_in_tiles: true
+      - short_name: "ed_reading"
+        df_field_name: "READING_FIELD"
+        long_name: "Third grade reading proficiency"
+        field_type: float
+        include_in_downloadable_files: true
+        include_in_tiles: true
+  - long_name: "Low-Income Energy Affordabililty Data"
+    short_name: "LEAD"
+    module_name: "doe_energy_burden"
+    input_geoid_tract_field_name: "FIP"
+    load_fields:
+      - short_name: "EBP_PFS"
+        df_field_name: "REVISED_ENERGY_BURDEN_FIELD_NAME"
+        long_name: "Energy burden"
+        field_type: float
+        include_in_downloadable_files: true
+        include_in_tiles: true
+  - long_name: "Formerly Used Defense Sites"
+    short_name: "FUDS"
+    module_name: "us_army_fuds"
+    load_fields:
+      - short_name: "fuds_count"
+        df_field_name: "ELIGIBLE_FUDS_COUNT_FIELD_NAME"
+        long_name: "Count of eligible Formerly Used Defense Site (FUDS) properties centroids"
+        description_short:
+          "The number of FUDS marked as Eligible and Has Project in the tract."
+        field_type: int64
+        include_in_tiles: false
+        include_in_downloadable_files: false
+      - short_name: "not_fuds_ct"
+        df_field_name: "INELIGIBLE_FUDS_COUNT_FIELD_NAME"
+        long_name: "Count of ineligible Formerly Used Defense Site (FUDS) properties centroids"
+        description_short:
+          "The number of FUDS marked as Ineligible or Project in the tract."
+        field_type: int64
+        include_in_tiles: false
+        include_in_downloadable_files: false
+      - short_name: "has_fuds"
+        df_field_name: "ELIGIBLE_FUDS_BINARY_FIELD_NAME"
+        long_name: "Is there at least one Formerly Used Defense Site (FUDS) in the tract?"
+        description_short:
+          "Whether the tract has a FUDS"
+        field_type: bool
+        include_in_tiles: false
+        include_in_downloadable_files: false
+  - long_name: "Abandoned Mine Land Inventory System"
+    short_name: "eAMLIS"
+    module_name: "eamlis"
+    load_fields:
+      - short_name: "has_aml"
+        df_field_name: "AML_BOOLEAN"
+        long_name: "Is there at least one abandoned mine in this census tract?"
+        description_short:
+          "Whether the tract has an abandoned mine"
+        field_type: bool
+        include_in_tiles: true
+        include_in_downloadable_files: true
+  - long_name: "Example ETL"
+    short_name: "Example"
+    module_name: "example_dataset"
+    input_geoid_tract_field_name: "GEOID10_TRACT"
+    load_fields:
+      - short_name: "EXAMPLE_FIELD"
+        df_field_name: "Input Field 1"
+        long_name: "Example Field 1"
+        field_type: float
+        include_in_tiles: true
+        include_in_downloadable_files: true
+  - long_name: "First Street Foundation Flood Risk"
+    short_name: "FSF Flood Risk"
+    module_name: fsf_flood_risk
+    input_geoid_tract_field_name: "GEOID"
+    load_fields:
+      - short_name: "flood_eligible_properties"
+        df_field_name: "COUNT_PROPERTIES"
+        long_name: "Count of properties eligible for flood risk calculation within tract (floor of 250)"
+        field_type: float
+        include_in_tiles: false
+        include_in_downloadable_files: true
+        create_percentile: false
+      - short_name: "flood_risk_properties_today"
+        df_field_name: "PROPERTIES_AT_RISK_FROM_FLOODING_TODAY"
+        long_name: "Count of properties at risk of flood today"
+        field_type: float
+        include_in_tiles: false
+        include_in_downloadable_files: true
+        create_percentile: false
+      - short_name: "flood_risk_properties_30yrs"
+        df_field_name: "PROPERTIES_AT_RISK_FROM_FLOODING_IN_30_YEARS"
+        long_name: "Count of properties at risk of flood in 30 years"
+        field_type: float
+        include_in_tiles: false
+        include_in_downloadable_files: true
+        create_percentile: false
+      - short_name: "flood_risk_share_today"
+        df_field_name: "SHARE_OF_PROPERTIES_AT_RISK_FROM_FLOODING_TODAY"
+        long_name: "Share of properties at risk of flood today"
+        field_type: float
+        include_in_tiles: false
+        include_in_downloadable_files: true
+        create_percentile: true
+      - short_name: "flood_risk_share_30yrs"
+        df_field_name: "SHARE_OF_PROPERTIES_AT_RISK_FROM_FLOODING_IN_30_YEARS"
+        long_name: "Share of properties at risk of flood in 30 years"
+        field_type: float
+        include_in_tiles: false
+        include_in_downloadable_files: true
+        create_percentile: true
+  - long_name: "First Street Foundation Wildfire Risk"
+    short_name: "FSF Wildfire Risk"
+    module_name: fsf_wildfire_risk
+    input_geoid_tract_field_name: "GEOID"
+    load_fields:
+      - short_name: "fire_eligible_properties"
+        df_field_name: "COUNT_PROPERTIES"
+        long_name: "Count of properties eligible for wildfire risk calculation within tract (floor of 250)"
+        field_type: float
+        include_in_tiles: false
+        include_in_downloadable_files: true
+        create_percentile: false
+      - short_name: "fire_risk_properties_today"
+        df_field_name: "PROPERTIES_AT_RISK_FROM_FIRE_TODAY"
+        long_name: "Count of properties at risk of wildfire today"
+        field_type: float
+        include_in_tiles: false
+        include_in_downloadable_files: true
+        create_percentile: false
+      - short_name: "fire_risk_properties_30yrs"
+        df_field_name: "PROPERTIES_AT_RISK_FROM_FIRE_IN_30_YEARS"
+        long_name: "Count of properties at risk of wildfire in 30 years"
+        field_type: float
+        include_in_tiles: false
+        include_in_downloadable_files: true
+        create_percentile: false
+      - short_name: "fire_risk_share_today"
+        df_field_name: "SHARE_OF_PROPERTIES_AT_RISK_FROM_FIRE_TODAY"
+        long_name: "Share of properties at risk of fire today"
+        field_type: float
+        include_in_tiles: false
+        include_in_downloadable_files: true
+        create_percentile: true
+      - short_name: "fire_risk_share_30yrs"
+        df_field_name: "SHARE_OF_PROPERTIES_AT_RISK_FROM_FIRE_IN_30_YEARS"
+        long_name: "Share of properties at risk of fire in 30 years"
+        field_type: float
+        include_in_tiles: false
+        include_in_downloadable_files: true
+        create_percentile: true
+  - long_name: "DOT Travel Disadvantage Index"
+    short_name: "DOT"
+    module_name: "travel_composite"
+    input_geoid_tract_field_name: "GEOID10_TRACT"
+    load_fields:
+      - short_name: "travel_burden"
+        df_field_name: "TRAVEL_BURDEN_FIELD_NAME"
+        long_name: "DOT Travel Barriers Score"
+        field_type: float
+        include_in_tiles: true
+        include_in_downloadable_files: true
+        create_percentile: true
+  - long_name: "National Land Cover Database (NLCD) Lack of Green Space / Nature-Deprived Communities dataset, as compiled by TPL"
+    short_name: "nlcd_nature_deprived"
+    module_name: "nlcd_nature_deprived"
+    input_geoid_tract_field_name: "GEOID10_TRACT"
+    load_fields:
+      - short_name: "ncld_eligible"
+        df_field_name: "ELIGIBLE_FOR_NATURE_DEPRIVED_FIELD_NAME"
+        long_name: "Does the tract have at least 35 acres in it?"
+        field_type: bool
+        include_in_tiles: true
+        include_in_downloadable_files: true
+        create_percentile: false
+      - short_name: "percent_impervious"
+        df_field_name: "TRACT_PERCENT_IMPERVIOUS_FIELD_NAME"
+        long_name: "Share of the tract's land area that is covered by impervious surface as a percent"
+        field_type: percentage
+        include_in_tiles: true
+        include_in_downloadable_files: true
+        create_percentile: true
+      - short_name: "percent_nonnatural"
+        df_field_name: "TRACT_PERCENT_NON_NATURAL_FIELD_NAME"
+        long_name: "Share of the tract's land area that is covered by impervious surface or cropland as a percent"
+        field_type: percentage
+        include_in_tiles: true
+        include_in_downloadable_files: true
+        create_percentile: true
+      - short_name: "percent_cropland"
+        df_field_name: "TRACT_PERCENT_CROPLAND_FIELD_NAME"
+        long_name: "Share of the tract's land area that is covered by cropland as a percent"
+        field_type: percentage
+        include_in_tiles: true
+        include_in_downloadable_files: true
+        create_percentile: true
+  - long_name: "Overlap between Census tract boundaries and Tribal area boundaries."
+    short_name: "tribal_overlap"
+    module_name: "tribal_overlap"
+    input_geoid_tract_field_name: "GEOID10_TRACT"
+    load_fields:
+      - short_name: "tribal_count"
+        df_field_name: "COUNT_OF_TRIBAL_AREAS_IN_TRACT"
+        long_name: "Number of Tribal areas within Census tract"
+        field_type: int64
+        include_in_tiles: true
+        include_in_downloadable_files: true
+        create_percentile: false
+      - short_name: "tribal_percent"
+        df_field_name: "PERCENT_OF_TRIBAL_AREA_IN_TRACT"
+        long_name: "Percent of the Census tract that is within Tribal areas"
+        field_type: float
+        include_in_tiles: true
+        include_in_downloadable_files: true
+        create_percentile: false
+        number_of_decimals_in_output: 6
+      - short_name: "tribal_names"
+        df_field_name: "NAMES_OF_TRIBAL_AREAS_IN_TRACT"
+        long_name: "Names of Tribal areas within Census tract"
+        field_type: string
+        include_in_tiles: true
+        include_in_downloadable_files: true
+  - long_name: "CDC Life Expeectancy"
+    short_name: "cdc_life_expectancy"
+    module_name: "cdc_life_expectancy"
+    input_geoid_tract_field_name: "Tract ID"
+    load_fields:
+      - short_name: "LLEF"
+        df_field_name: "LIFE_EXPECTANCY_FIELD_NAME"
+        long_name: "Life expectancy (years)"
+        field_type: float
+        include_in_tiles: false
+        include_in_downloadable_files: true
+        create_percentile: false
+        create_reverse_percentile: true
@@ -2,9 +2,11 @@ import os
 from pathlib import Path
 
 from data_pipeline.config import settings
 
 from data_pipeline.score import field_names
 
+## note: to keep map porting "right" fields, keeping descriptors the same.
+
 # Base Paths
 DATA_PATH = Path(settings.APP_ROOT) / "data"
 TMP_PATH = DATA_PATH / "tmp"
@@ -115,7 +117,7 @@ ISLAND_AREAS_EXPLANATION = (
 CENSUS_COUNTIES_COLUMNS = ["USPS", "GEOID", "NAME"]
 
 # Drop FIPS codes from map
-DROP_FIPS_CODES = ["66", "78"]
+DROP_FIPS_CODES = []
 
 # Drop FIPS codes from incrementing
 DROP_FIPS_FROM_NON_WTD_THRESHOLDS = "72"
@@ -138,7 +140,7 @@ TILES_ROUND_NUM_DECIMALS = 2
 # Controlling Tile user experience columns
 THRESHOLD_COUNT_TO_SHOW_FIELD_NAME = "THRHLD"
 TILES_ISLAND_AREAS_THRESHOLD_COUNT = 3
-TILES_PUERTO_RICO_THRESHOLD_COUNT = 4
+TILES_PUERTO_RICO_THRESHOLD_COUNT = 10
 TILES_NATION_THRESHOLD_COUNT = 21
 
 # Note that the FIPS code is a string
@@ -146,6 +148,58 @@ TILES_NATION_THRESHOLD_COUNT = 21
 # 60: American Samoa, 66: Guam, 69: N. Mariana Islands, 78: US Virgin Islands
 TILES_ISLAND_AREA_FIPS_CODES = ["60", "66", "69", "78"]
 TILES_PUERTO_RICO_FIPS_CODE = ["72"]
+TILES_ALASKA_AND_HAWAII_FIPS_CODE = ["02", "15"]
+TILES_CONTINENTAL_US_FIPS_CODE = [
+    "01",
+    "04",
+    "05",
+    "06",
+    "08",
+    "09",
+    "10",
+    "11",
+    "12",
+    "13",
+    "16",
+    "17",
+    "18",
+    "19",
+    "20",
+    "21",
+    "22",
+    "23",
+    "24",
+    "25",
+    "26",
+    "27",
+    "28",
+    "29",
+    "30",
+    "31",
+    "32",
+    "33",
+    "34",
+    "35",
+    "36",
+    "37",
+    "38",
+    "39",
+    "40",
+    "41",
+    "42",
+    "44",
+    "45",
+    "46",
+    "47",
+    "48",
+    "49",
+    "50",
+    "51",
+    "53",
+    "54",
+    "55",
+    "56",
+]
 
 # Constant to reflect UI Experience version
 # "Nation" referring to 50 states and DC is from Census
@@ -189,16 +243,17 @@ TILES_SCORE_COLUMNS = {
     + field_names.PERCENTILE_FIELD_SUFFIX: "LIF_PFS",
     field_names.LOW_MEDIAN_INCOME_AS_PERCENT_OF_AMI_FIELD
     + field_names.PERCENTILE_FIELD_SUFFIX: "LMI_PFS",
-    field_names.MEDIAN_HOUSE_VALUE_FIELD
-    + field_names.PERCENTILE_FIELD_SUFFIX: "MHVF_PFS",
     field_names.PM25_FIELD + field_names.PERCENTILE_FIELD_SUFFIX: "PM25F_PFS",
     field_names.HIGH_SCHOOL_ED_FIELD: "HSEF",
     field_names.POVERTY_LESS_THAN_100_FPL_FIELD
     + field_names.PERCENTILE_FIELD_SUFFIX: "P100_PFS",
-    field_names.POVERTY_LESS_THAN_200_FPL_FIELD
-    + field_names.PERCENTILE_FIELD_SUFFIX: "P200_PFS",
+    field_names.POVERTY_LESS_THAN_200_FPL_IMPUTED_FIELD
+    + field_names.PERCENTILE_FIELD_SUFFIX: "P200_I_PFS",
+    field_names.FPL_200_SERIES_IMPUTED_AND_ADJUSTED_DONUTS: "AJDLI_ET",
     field_names.LEAD_PAINT_FIELD
     + field_names.PERCENTILE_FIELD_SUFFIX: "LPF_PFS",
+    field_names.NO_KITCHEN_OR_INDOOR_PLUMBING_FIELD
+    + field_names.PERCENTILE_FIELD_SUFFIX: "KP_PFS",
     field_names.NPL_FIELD + field_names.PERCENTILE_FIELD_SUFFIX: "NPL_PFS",
     field_names.RMP_FIELD + field_names.PERCENTILE_FIELD_SUFFIX: "RMP_PFS",
     field_names.TSDF_FIELD + field_names.PERCENTILE_FIELD_SUFFIX: "TSDF_PFS",
@ -208,37 +263,24 @@ TILES_SCORE_COLUMNS = {
|
||||||
+ field_names.PERCENTILE_FIELD_SUFFIX: "UF_PFS",
|
+ field_names.PERCENTILE_FIELD_SUFFIX: "UF_PFS",
|
||||||
field_names.WASTEWATER_FIELD
|
field_names.WASTEWATER_FIELD
|
||||||
+ field_names.PERCENTILE_FIELD_SUFFIX: "WF_PFS",
|
+ field_names.PERCENTILE_FIELD_SUFFIX: "WF_PFS",
|
||||||
field_names.M_WATER: "M_WTR",
|
field_names.UST_FIELD + field_names.PERCENTILE_FIELD_SUFFIX: "UST_PFS",
|
||||||
field_names.M_WORKFORCE: "M_WKFC",
|
field_names.N_WATER: "N_WTR",
|
||||||
field_names.M_CLIMATE: "M_CLT",
|
field_names.N_WORKFORCE: "N_WKFC",
|
||||||
field_names.M_ENERGY: "M_ENY",
|
field_names.N_CLIMATE: "N_CLT",
|
||||||
field_names.M_TRANSPORTATION: "M_TRN",
|
field_names.N_ENERGY: "N_ENY",
|
||||||
field_names.M_HOUSING: "M_HSG",
|
field_names.N_TRANSPORTATION: "N_TRN",
|
||||||
field_names.M_POLLUTION: "M_PLN",
|
field_names.N_HOUSING: "N_HSG",
|
||||||
field_names.M_HEALTH: "M_HLTH",
|
field_names.N_POLLUTION: "N_PLN",
|
||||||
field_names.SCORE_M_COMMUNITIES: "SM_C",
|
field_names.N_HEALTH: "N_HLTH",
|
||||||
field_names.SCORE_M + field_names.PERCENTILE_FIELD_SUFFIX: "SM_PFS",
|
# temporarily update this so that it's the Narwhal score that gets visualized on the map
|
||||||
field_names.EXPECTED_POPULATION_LOSS_RATE_LOW_INCOME_LOW_HIGHER_ED_FIELD: "EPLRLI",
|
# The NEW final score value INCLUDES the adjacency index.
|
||||||
field_names.EXPECTED_AGRICULTURE_LOSS_RATE_LOW_INCOME_LOW_HIGHER_ED_FIELD: "EALRLI",
|
field_names.FINAL_SCORE_N_BOOLEAN: "SN_C",
|
||||||
field_names.EXPECTED_BUILDING_LOSS_RATE_LOW_INCOME_LOW_HIGHER_ED_FIELD: "EBLRLI",
|
field_names.IS_TRIBAL_DAC: "SN_T",
|
||||||
field_names.PM25_EXPOSURE_LOW_INCOME_LOW_HIGHER_ED_FIELD: "PM25LI",
|
field_names.DIABETES_LOW_INCOME_FIELD: "DLI",
|
||||||
field_names.ENERGY_BURDEN_LOW_INCOME_LOW_HIGHER_ED_FIELD: "EBLI",
|
field_names.ASTHMA_LOW_INCOME_FIELD: "ALI",
|
||||||
field_names.DIESEL_PARTICULATE_MATTER_LOW_INCOME_LOW_HIGHER_ED_FIELD: "DPMLI",
|
field_names.POVERTY_LOW_HS_EDUCATION_FIELD: "PLHSE",
|
||||||
field_names.TRAFFIC_PROXIMITY_LOW_INCOME_LOW_HIGHER_ED_FIELD: "TPLI",
|
field_names.LOW_MEDIAN_INCOME_LOW_HS_EDUCATION_FIELD: "LMILHSE",
|
||||||
field_names.LEAD_PAINT_MEDIAN_HOUSE_VALUE_LOW_INCOME_LOW_HIGHER_ED_FIELD: "LPMHVLI",
|
field_names.UNEMPLOYMENT_LOW_HS_EDUCATION_FIELD: "ULHSE",
|
||||||
field_names.HOUSING_BURDEN_LOW_INCOME_LOW_HIGHER_ED_FIELD: "HBLI",
|
|
||||||
field_names.RMP_LOW_INCOME_LOW_HIGHER_ED_FIELD: "RMPLI",
|
|
||||||
field_names.SUPERFUND_LOW_INCOME_LOW_HIGHER_ED_FIELD: "SFLI",
|
|
||||||
field_names.HAZARDOUS_WASTE_LOW_INCOME_LOW_HIGHER_ED_FIELD: "HWLI",
|
|
||||||
field_names.WASTEWATER_DISCHARGE_LOW_INCOME_LOW_HIGHER_ED_FIELD: "WDLI",
|
|
||||||
field_names.DIABETES_LOW_INCOME_LOW_HIGHER_ED_FIELD: "DLI",
|
|
||||||
field_names.ASTHMA_LOW_INCOME_LOW_HIGHER_ED_FIELD: "ALI",
|
|
||||||
field_names.HEART_DISEASE_LOW_INCOME_LOW_HIGHER_ED_FIELD: "HDLI",
|
|
||||||
field_names.LOW_LIFE_EXPECTANCY_LOW_INCOME_LOW_HIGHER_ED_FIELD: "LLELI",
|
|
||||||
field_names.LINGUISTIC_ISOLATION_LOW_HS_LOW_HIGHER_ED_FIELD: "LILHSE",
|
|
||||||
field_names.POVERTY_LOW_HS_LOW_HIGHER_ED_FIELD: "PLHSE",
|
|
||||||
field_names.LOW_MEDIAN_INCOME_LOW_HS_LOW_HIGHER_ED_FIELD: "LMILHSE",
|
|
||||||
field_names.UNEMPLOYMENT_LOW_HS_LOW_HIGHER_ED_FIELD: "ULHSE",
|
|
||||||
# new booleans only for the environmental factors
|
# new booleans only for the environmental factors
|
||||||
field_names.EXPECTED_POPULATION_LOSS_EXCEEDS_PCTILE_THRESHOLD: "EPL_ET",
|
field_names.EXPECTED_POPULATION_LOSS_EXCEEDS_PCTILE_THRESHOLD: "EPL_ET",
|
||||||
field_names.EXPECTED_AGRICULTURAL_LOSS_EXCEEDS_PCTILE_THRESHOLD: "EAL_ET",
|
field_names.EXPECTED_AGRICULTURAL_LOSS_EXCEEDS_PCTILE_THRESHOLD: "EAL_ET",
|
||||||
|
@ -248,11 +290,14 @@ TILES_SCORE_COLUMNS = {
|
||||||
field_names.DIESEL_EXCEEDS_PCTILE_THRESHOLD: "DS_ET",
|
field_names.DIESEL_EXCEEDS_PCTILE_THRESHOLD: "DS_ET",
|
||||||
field_names.TRAFFIC_PROXIMITY_PCTILE_THRESHOLD: "TP_ET",
|
field_names.TRAFFIC_PROXIMITY_PCTILE_THRESHOLD: "TP_ET",
|
||||||
field_names.LEAD_PAINT_PROXY_PCTILE_THRESHOLD: "LPP_ET",
|
field_names.LEAD_PAINT_PROXY_PCTILE_THRESHOLD: "LPP_ET",
|
||||||
|
field_names.HISTORIC_REDLINING_SCORE_EXCEEDED: "HRS_ET",
|
||||||
|
field_names.NO_KITCHEN_OR_INDOOR_PLUMBING_PCTILE_THRESHOLD: "KP_ET",
|
||||||
field_names.HOUSING_BURDEN_PCTILE_THRESHOLD: "HB_ET",
|
field_names.HOUSING_BURDEN_PCTILE_THRESHOLD: "HB_ET",
|
||||||
field_names.RMP_PCTILE_THRESHOLD: "RMP_ET",
|
field_names.RMP_PCTILE_THRESHOLD: "RMP_ET",
|
||||||
field_names.NPL_PCTILE_THRESHOLD: "NPL_ET",
|
field_names.NPL_PCTILE_THRESHOLD: "NPL_ET",
|
||||||
field_names.TSDF_PCTILE_THRESHOLD: "TSDF_ET",
|
field_names.TSDF_PCTILE_THRESHOLD: "TSDF_ET",
|
||||||
field_names.WASTEWATER_PCTILE_THRESHOLD: "WD_ET",
|
field_names.WASTEWATER_PCTILE_THRESHOLD: "WD_ET",
|
||||||
|
field_names.UST_PCTILE_THRESHOLD: "UST_ET",
|
||||||
field_names.DIABETES_PCTILE_THRESHOLD: "DB_ET",
|
field_names.DIABETES_PCTILE_THRESHOLD: "DB_ET",
|
||||||
field_names.ASTHMA_PCTILE_THRESHOLD: "A_ET",
|
field_names.ASTHMA_PCTILE_THRESHOLD: "A_ET",
|
||||||
field_names.HEART_DISEASE_PCTILE_THRESHOLD: "HD_ET",
|
field_names.HEART_DISEASE_PCTILE_THRESHOLD: "HD_ET",
|
||||||
|
@ -278,79 +323,56 @@ TILES_SCORE_COLUMNS = {
|
||||||
field_names.CENSUS_DECENNIAL_UNEMPLOYMENT_FIELD_2009
|
field_names.CENSUS_DECENNIAL_UNEMPLOYMENT_FIELD_2009
|
||||||
+ field_names.ISLAND_AREAS_PERCENTILE_ADJUSTMENT_FIELD
|
+ field_names.ISLAND_AREAS_PERCENTILE_ADJUSTMENT_FIELD
|
||||||
+ field_names.PERCENTILE_FIELD_SUFFIX: "IAULHSE_PFS",
|
+ field_names.PERCENTILE_FIELD_SUFFIX: "IAULHSE_PFS",
|
||||||
field_names.LOW_HS_EDUCATION_LOW_HIGHER_ED_FIELD: "LHE",
|
field_names.LOW_HS_EDUCATION_FIELD: "LHE",
|
||||||
field_names.ISLAND_AREAS_LOW_HS_EDUCATION_FIELD: "IALHE",
|
field_names.ISLAND_AREAS_LOW_HS_EDUCATION_FIELD: "IALHE",
|
||||||
# Percentage of HS Degree completion for Islands
|
# Percentage of HS Degree completion for Islands
|
||||||
field_names.CENSUS_DECENNIAL_HIGH_SCHOOL_ED_FIELD_2009: "IAHSEF",
|
field_names.CENSUS_DECENNIAL_HIGH_SCHOOL_ED_FIELD_2009: "IAHSEF",
|
||||||
field_names.COLLEGE_ATTENDANCE_FIELD: "CA",
|
|
||||||
field_names.COLLEGE_NON_ATTENDANCE_FIELD: "NCA",
|
|
||||||
# This is logically equivalent to "non-college greater than 80%"
|
|
||||||
field_names.COLLEGE_ATTENDANCE_LESS_THAN_20_FIELD: "CA_LT20",
|
|
||||||
# Booleans for the front end about the types of thresholds exceeded
|
# Booleans for the front end about the types of thresholds exceeded
|
||||||
field_names.CLIMATE_THRESHOLD_EXCEEDED: "M_CLT_EOMI",
|
field_names.CLIMATE_THRESHOLD_EXCEEDED: "N_CLT_EOMI",
|
||||||
field_names.ENERGY_THRESHOLD_EXCEEDED: "M_ENY_EOMI",
|
field_names.ENERGY_THRESHOLD_EXCEEDED: "N_ENY_EOMI",
|
||||||
field_names.TRAFFIC_THRESHOLD_EXCEEDED: "M_TRN_EOMI",
|
field_names.TRAFFIC_THRESHOLD_EXCEEDED: "N_TRN_EOMI",
|
||||||
field_names.HOUSING_THREHSOLD_EXCEEDED: "M_HSG_EOMI",
|
field_names.HOUSING_THREHSOLD_EXCEEDED: "N_HSG_EOMI",
|
||||||
field_names.POLLUTION_THRESHOLD_EXCEEDED: "M_PLN_EOMI",
|
field_names.POLLUTION_THRESHOLD_EXCEEDED: "N_PLN_EOMI",
|
||||||
field_names.WATER_THRESHOLD_EXCEEDED: "M_WTR_EOMI",
|
field_names.WATER_THRESHOLD_EXCEEDED: "N_WTR_EOMI",
|
||||||
field_names.HEALTH_THRESHOLD_EXCEEDED: "M_HLTH_EOMI",
|
field_names.HEALTH_THRESHOLD_EXCEEDED: "N_HLTH_EOMI",
|
||||||
field_names.WORKFORCE_THRESHOLD_EXCEEDED: "M_WKFC_EOMI",
|
field_names.WORKFORCE_THRESHOLD_EXCEEDED: "N_WKFC_EOMI",
|
||||||
# These are the booleans for socioeconomic indicators
|
# These are the booleans for socioeconomic indicators
|
||||||
## this measures low income boolean
|
## this measures low income boolean
|
||||||
field_names.FPL_200_SERIES: "FPL200S",
|
field_names.FPL_200_SERIES_IMPUTED_AND_ADJUSTED: "FPL200S",
|
||||||
## Low high school and low higher ed for t&wd
|
## Low high school for t&wd
|
||||||
field_names.WORKFORCE_SOCIO_INDICATORS_EXCEEDED: "M_WKFC_EBSI",
|
field_names.WORKFORCE_SOCIO_INDICATORS_EXCEEDED: "N_WKFC_EBSI",
|
||||||
## FPL 200 and low higher ed for all others
|
field_names.DOT_BURDEN_PCTILE_THRESHOLD: "TD_ET",
|
||||||
field_names.FPL_200_AND_COLLEGE_ATTENDANCE_SERIES: "M_EBSI",
|
field_names.DOT_TRAVEL_BURDEN_FIELD
|
||||||
|
+ field_names.PERCENTILE_FIELD_SUFFIX: "TD_PFS",
|
||||||
|
field_names.FUTURE_FLOOD_RISK_FIELD
|
||||||
|
+ field_names.PERCENTILE_FIELD_SUFFIX: "FLD_PFS",
|
||||||
|
field_names.FUTURE_WILDFIRE_RISK_FIELD
|
||||||
|
+ field_names.PERCENTILE_FIELD_SUFFIX: "WFR_PFS",
|
||||||
|
field_names.HIGH_FUTURE_FLOOD_RISK_FIELD: "FLD_ET",
|
||||||
|
field_names.HIGH_FUTURE_WILDFIRE_RISK_FIELD: "WFR_ET",
|
||||||
|
field_names.ADJACENT_TRACT_SCORE_ABOVE_DONUT_THRESHOLD: "ADJ_ET",
|
||||||
|
field_names.TRACT_PERCENT_NON_NATURAL_FIELD_NAME
|
||||||
|
+ field_names.PERCENTILE_FIELD_SUFFIX: "IS_PFS",
|
||||||
|
field_names.NON_NATURAL_LOW_INCOME_FIELD_NAME: "IS_ET",
|
||||||
|
field_names.AML_BOOLEAN_FILLED_IN: "AML_ET",
|
||||||
|
field_names.ELIGIBLE_FUDS_BINARY_FIELD_NAME: "FUDS_RAW",
|
||||||
|
field_names.ELIGIBLE_FUDS_FILLED_IN_FIELD_NAME: "FUDS_ET",
|
||||||
|
field_names.IMPUTED_INCOME_FLAG_FIELD_NAME: "IMP_FLG",
|
||||||
|
## FPL 200 and low higher ed for all others should no longer be M_EBSI, but rather
|
||||||
|
## FPL_200 (there is no higher ed in narwhal)
|
||||||
|
field_names.PERCENT_BLACK_FIELD_NAME: "DM_B",
|
||||||
|
field_names.PERCENT_AMERICAN_INDIAN_FIELD_NAME: "DM_AI",
|
||||||
|
field_names.PERCENT_ASIAN_FIELD_NAME: "DM_A",
|
||||||
|
field_names.PERCENT_HAWAIIAN_FIELD_NAME: "DM_HI",
|
||||||
|
field_names.PERCENT_TWO_OR_MORE_RACES_FIELD_NAME: "DM_T",
|
||||||
|
field_names.PERCENT_NON_HISPANIC_WHITE_FIELD_NAME: "DM_W",
|
||||||
|
field_names.PERCENT_HISPANIC_FIELD_NAME: "DM_H",
|
||||||
|
field_names.PERCENT_OTHER_RACE_FIELD_NAME: "DM_O",
|
||||||
|
field_names.PERCENT_AGE_UNDER_10: "AGE_10",
|
||||||
|
field_names.PERCENT_AGE_10_TO_64: "AGE_MIDDLE",
|
||||||
|
field_names.PERCENT_AGE_OVER_64: "AGE_OLD",
|
||||||
|
field_names.COUNT_OF_TRIBAL_AREAS_IN_TRACT_AK: "TA_COUNT_AK",
|
||||||
|
field_names.COUNT_OF_TRIBAL_AREAS_IN_TRACT_CONUS: "TA_COUNT_C",
|
||||||
|
field_names.PERCENT_OF_TRIBAL_AREA_IN_TRACT: "TA_PERC",
|
||||||
|
field_names.PERCENT_OF_TRIBAL_AREA_IN_TRACT_DISPLAY: "TA_PERC_FE",
|
||||||
}
|
}
|
||||||
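TILES_SCORE_COLUMNS maps verbose internal column names to the short codes that ship in the map tiles; downstream, the tile step selects the keys and renames to the values. A minimal sketch of that use with hypothetical field names (the real keys come from field_names):

    import pandas as pd

    # Two-column stand-in for the full score output.
    score_df = pd.DataFrame(
        {
            "Total threshold criteria exceeded": [2, 0],
            "GEOID10_TRACT": ["01001020100", "72001956300"],
        }
    )

    TILES_SCORE_COLUMNS = {
        "Total threshold criteria exceeded": "TC",
        "GEOID10_TRACT": "GTF",
    }

    # Select the internal names, then rename them to the short tile codes.
    tile_df = score_df[list(TILES_SCORE_COLUMNS.keys())].rename(
        columns=TILES_SCORE_COLUMNS
    )
    print(tile_df.columns.tolist())  # ['TC', 'GTF']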
 
-# columns to round floats to 2 decimals
-# TODO refactor to use much smaller subset of fields we DON'T want to round
-TILES_SCORE_FLOAT_COLUMNS = [
-    field_names.DIABETES_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.ASTHMA_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.HEART_DISEASE_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.DIESEL_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.ENERGY_BURDEN_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.EXPECTED_AGRICULTURE_LOSS_RATE_FIELD
-    + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.EXPECTED_BUILDING_LOSS_RATE_FIELD
-    + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.EXPECTED_POPULATION_LOSS_RATE_FIELD
-    + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.HOUSING_BURDEN_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.LOW_LIFE_EXPECTANCY_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.LINGUISTIC_ISO_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.LOW_MEDIAN_INCOME_AS_PERCENT_OF_AMI_FIELD
-    + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.MEDIAN_HOUSE_VALUE_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.PM25_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.POVERTY_LESS_THAN_100_FPL_FIELD
-    + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.POVERTY_LESS_THAN_200_FPL_FIELD
-    + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.LEAD_PAINT_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.NPL_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.RMP_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.TSDF_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.TRAFFIC_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.UNEMPLOYMENT_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
-    # Percentiles for Island areas' workforce columns
-    # To be clear: the island areas pull from 2009 census. PR does not.
-    field_names.LOW_CENSUS_DECENNIAL_AREA_MEDIAN_INCOME_PERCENT_FIELD_2009
-    + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.CENSUS_DECENNIAL_POVERTY_LESS_THAN_100_FPL_FIELD_2009
-    + field_names.ISLAND_AREAS_PERCENTILE_ADJUSTMENT_FIELD
-    + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.CENSUS_DECENNIAL_UNEMPLOYMENT_FIELD_2009
-    + field_names.ISLAND_AREAS_PERCENTILE_ADJUSTMENT_FIELD
-    + field_names.PERCENTILE_FIELD_SUFFIX,
-    # Island areas HS degree attainment rate
-    field_names.CENSUS_DECENNIAL_HIGH_SCHOOL_ED_FIELD_2009,
-    field_names.LOW_HS_EDUCATION_LOW_HIGHER_ED_FIELD,
-    field_names.ISLAND_AREAS_LOW_HS_EDUCATION_FIELD,
-    field_names.WASTEWATER_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.SCORE_M + field_names.PERCENTILE_FIELD_SUFFIX,
-    field_names.COLLEGE_NON_ATTENDANCE_FIELD,
-    field_names.COLLEGE_ATTENDANCE_FIELD,
-]
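The hand-maintained TILES_SCORE_FLOAT_COLUMNS list removed here fed a rounding pass driven by TILES_ROUND_NUM_DECIMALS = 2 (visible in a hunk header above). The truncated tail of this patch suggests the replacement derives the float columns from dtypes instead. A sketch of that approach, assuming float64 columns are the target:

    import pandas as pd

    TILES_ROUND_NUM_DECIMALS = 2

    # Hypothetical tile frame: one float column, one string column.
    score_tiles = pd.DataFrame(
        {"HB_PFS": [0.98765, 0.12345], "GTF": ["01001020100", "01001020200"]}
    )

    # Deriving the float columns from dtypes removes the need to maintain
    # a hand-curated list of columns to round.
    float_cols = [
        col
        for col, col_dtype in score_tiles.dtypes.items()
        if col_dtype == "float64"
    ]
    score_tiles[float_cols] = score_tiles[float_cols].round(TILES_ROUND_NUM_DECIMALS)
    print(score_tiles["HB_PFS"].tolist())  # [0.99, 0.12]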
@@ -1,17 +1,26 @@
 import functools
-from collections import namedtuple
+from dataclasses import dataclass
+from typing import List
 
 import numpy as np
 import pandas as pd
 
 from data_pipeline.etl.base import ExtractTransformLoad
+from data_pipeline.etl.score import constants
+from data_pipeline.etl.sources.census_acs.etl import CensusACSETL
+from data_pipeline.etl.sources.dot_travel_composite.etl import (
+    TravelCompositeETL,
+)
+from data_pipeline.etl.sources.eamlis.etl import AbandonedMineETL
+from data_pipeline.etl.sources.fsf_flood_risk.etl import FloodRiskETL
+from data_pipeline.etl.sources.fsf_wildfire_risk.etl import WildfireRiskETL
 from data_pipeline.etl.sources.national_risk_index.etl import (
     NationalRiskIndexETL,
 )
-from data_pipeline.score.score_runner import ScoreRunner
+from data_pipeline.etl.sources.nlcd_nature_deprived.etl import NatureDeprivedETL
+from data_pipeline.etl.sources.tribal_overlap.etl import TribalOverlapETL
+from data_pipeline.etl.sources.us_army_fuds.etl import USArmyFUDS
 from data_pipeline.score import field_names
-from data_pipeline.etl.score import constants
+from data_pipeline.score.score_runner import ScoreRunner
 
 from data_pipeline.utils import get_module_logger
 
 logger = get_module_logger(__name__)
@@ -24,7 +33,7 @@ class ScoreETL(ExtractTransformLoad):
         # dataframes
         self.df: pd.DataFrame
         self.ejscreen_df: pd.DataFrame
-        self.census_df: pd.DataFrame
+        self.census_acs_df: pd.DataFrame
         self.hud_housing_df: pd.DataFrame
         self.cdc_places_df: pd.DataFrame
         self.census_acs_median_incomes_df: pd.DataFrame
@@ -32,18 +41,25 @@ class ScoreETL(ExtractTransformLoad):
         self.doe_energy_burden_df: pd.DataFrame
         self.national_risk_index_df: pd.DataFrame
         self.geocorr_urban_rural_df: pd.DataFrame
-        self.persistent_poverty_df: pd.DataFrame
         self.census_decennial_df: pd.DataFrame
         self.census_2010_df: pd.DataFrame
-        self.child_opportunity_index_df: pd.DataFrame
+        self.national_tract_df: pd.DataFrame
+        self.hrs_df: pd.DataFrame
+        self.dot_travel_disadvantage_df: pd.DataFrame
+        self.fsf_flood_df: pd.DataFrame
+        self.fsf_fire_df: pd.DataFrame
+        self.nature_deprived_df: pd.DataFrame
+        self.eamlis_df: pd.DataFrame
+        self.fuds_df: pd.DataFrame
+        self.tribal_overlap_df: pd.DataFrame
 
+        self.ISLAND_DEMOGRAPHIC_BACKFILL_FIELDS: List[str] = []
 
     def extract(self) -> None:
         logger.info("Loading data sets from disk.")
 
         # EJSCreen csv Load
-        ejscreen_csv = (
-            constants.DATA_PATH / "dataset" / "ejscreen_2019" / "usa.csv"
-        )
+        ejscreen_csv = constants.DATA_PATH / "dataset" / "ejscreen" / "usa.csv"
         self.ejscreen_df = pd.read_csv(
             ejscreen_csv,
             dtype={self.GEOID_TRACT_FIELD_NAME: "string"},
@@ -51,14 +67,7 @@ class ScoreETL(ExtractTransformLoad):
         )
 
         # Load census data
-        census_csv = (
-            constants.DATA_PATH / "dataset" / "census_acs_2019" / "usa.csv"
-        )
-        self.census_df = pd.read_csv(
-            census_csv,
-            dtype={self.GEOID_TRACT_FIELD_NAME: "string"},
-            low_memory=False,
-        )
+        self.census_acs_df = CensusACSETL.get_data_frame()
 
         # Load HUD housing data
         hud_housing_csv = (
@@ -116,6 +125,27 @@ class ScoreETL(ExtractTransformLoad):
         # Load FEMA national risk index data
         self.national_risk_index_df = NationalRiskIndexETL.get_data_frame()
 
+        # Load DOT Travel Disadvantage
+        self.dot_travel_disadvantage_df = TravelCompositeETL.get_data_frame()
+
+        # Load fire risk data
+        self.fsf_fire_df = WildfireRiskETL.get_data_frame()
+
+        # Load flood risk data
+        self.fsf_flood_df = FloodRiskETL.get_data_frame()
+
+        # Load NLCD Nature-Deprived Communities data
+        self.nature_deprived_df = NatureDeprivedETL.get_data_frame()
+
+        # Load eAMLIS dataset
+        self.eamlis_df = AbandonedMineETL.get_data_frame()
+
+        # Load FUDS dataset
+        self.fuds_df = USArmyFUDS.get_data_frame()
+
+        # Load Tribal overlap dataset
+        self.tribal_overlap_df = TribalOverlapETL.get_data_frame()
+
         # Load GeoCorr Urban Rural Map
         geocorr_urban_rural_csv = (
             constants.DATA_PATH / "dataset" / "geocorr" / "usa.csv"
@@ -126,16 +156,6 @@ class ScoreETL(ExtractTransformLoad):
             low_memory=False,
         )
 
-        # Load persistent poverty
-        persistent_poverty_csv = (
-            constants.DATA_PATH / "dataset" / "persistent_poverty" / "usa.csv"
-        )
-        self.persistent_poverty_df = pd.read_csv(
-            persistent_poverty_csv,
-            dtype={self.GEOID_TRACT_FIELD_NAME: "string"},
-            low_memory=False,
-        )
-
         # Load decennial census data
         census_decennial_csv = (
             constants.DATA_PATH
@@ -159,19 +179,26 @@ class ScoreETL(ExtractTransformLoad):
             low_memory=False,
         )
 
-        # Load COI data
-        child_opportunity_index_csv = (
-            constants.DATA_PATH
-            / "dataset"
-            / "child_opportunity_index"
-            / "usa.csv"
+        # Load HRS data
+        hrs_csv = (
+            constants.DATA_PATH / "dataset" / "historic_redlining" / "usa.csv"
         )
-        self.child_opportunity_index_df = pd.read_csv(
-            child_opportunity_index_csv,
+        self.hrs_df = pd.read_csv(
+            hrs_csv,
            dtype={self.GEOID_TRACT_FIELD_NAME: "string"},
            low_memory=False,
        )
 
+        national_tract_csv = constants.DATA_CENSUS_CSV_FILE_PATH
+        self.national_tract_df = pd.read_csv(
+            national_tract_csv,
+            names=[self.GEOID_TRACT_FIELD_NAME],
+            dtype={self.GEOID_TRACT_FIELD_NAME: "string"},
+            low_memory=False,
+            header=None,
+        )
+
     def _join_tract_dfs(self, census_tract_dfs: list) -> pd.DataFrame:
         logger.info("Joining Census Tract dataframes")
 
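The extract step above repeatedly collapses a hand-rolled pd.read_csv into a single SomeETL.get_data_frame() call. The ExtractTransformLoad base class is not part of this patch, so the following is only a sketch of what such a classmethod plausibly does; the class name, path, and constants are hypothetical:

    from pathlib import Path

    import pandas as pd


    class ExampleETL:
        """Illustrative stand-in for an ExtractTransformLoad subclass; the
        real base-class contract is not shown in this patch."""

        GEOID_TRACT_FIELD_NAME = "GEOID10_TRACT"
        OUTPUT_PATH = Path("data/dataset/example/usa.csv")

        @classmethod
        def get_data_frame(cls) -> pd.DataFrame:
            # Centralizing the read keeps the tract ID typed as a string so
            # leading zeros in FIPS-prefixed GEOIDs are never lost.
            return pd.read_csv(
                cls.OUTPUT_PATH,
                dtype={cls.GEOID_TRACT_FIELD_NAME: "string"},
                low_memory=False,
            )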
@@ -253,6 +280,7 @@ class ScoreETL(ExtractTransformLoad):
         df: pd.DataFrame,
         input_column_name: str,
         output_column_name_root: str,
+        drop_tracts: list = None,
         ascending: bool = True,
     ) -> pd.DataFrame:
         """Creates percentiles.
@@ -262,98 +290,46 @@ class ScoreETL(ExtractTransformLoad):
         E.g., "PM2.5 exposure (percentile)".
         This will be for the entire country.
 
-        For an "apples-to-apples" comparison of urban tracts to other urban tracts,
-        and compare rural tracts to other rural tracts.
-
-        This percentile will be created and returned as
-        f"{output_column_name_root}{field_names.PERCENTILE_URBAN_RURAL_FIELD_SUFFIX}".
-        E.g., "PM2.5 exposure (percentile urban/rural)".
-        This field exists for every tract, but for urban tracts this value will be the
-        percentile compared to other urban tracts, and for rural tracts this value
-        will be the percentile compared to other rural tracts.
-
-        Specific methodology:
-        1. Decide a methodology for confirming whether a tract counts as urban or
-        rural. Currently in the codebase, we use Geocorr to identify the % rural of
-        a tract, and mark the tract as rural if the percentage is >50% and urban
-        otherwise. This may or may not be the right methodology.
-        2. Once tracts are marked as urban or rural, create one percentile rank
-        that only ranks urban tracts, and one percentile rank that only ranks rural
-        tracts.
-        3. Combine into a single field.
-
         `output_column_name_root` is different from `input_column_name` to enable the
         reverse percentile use case. In that use case, `input_column_name` may be
         something like "3rd grade reading proficiency" and `output_column_name_root`
         may be something like "Low 3rd grade reading proficiency".
         """
-        if (
-            output_column_name_root
-            != field_names.EXPECTED_AGRICULTURE_LOSS_RATE_FIELD
-        ):
+        # We have two potential options for assessing how to calculate percentiles.
+        # For the vast majority of columns, we will simply calculate percentiles overall.
+        # However, for Linguistic Isolation and Agricultural Value Loss, there exist conditions
+        # for which we drop out tracts from consideration in the percentile. More details on those
+        # are below; for them, we provide a list of tracts to not include.
+        # Because of the fancy transformations below, I have removed the urban / rural percentiles,
+        # which are now deprecated.
+        if not drop_tracts:
             # Create the "basic" percentile.
+            ## note: I believe this is less performant than if we made a bunch of these PFS columns
+            ## and then concatenated the list. For the refactor!
             df[
                 f"{output_column_name_root}"
                 f"{field_names.PERCENTILE_FIELD_SUFFIX}"
             ] = df[input_column_name].rank(pct=True, ascending=ascending)
 
         else:
-            # For agricultural loss, we are using whether there is value at all to determine percentile and then
-            # filling places where the value is False with 0
+            tmp_series = df[input_column_name].where(
+                ~df[field_names.GEOID_TRACT_FIELD].isin(drop_tracts),
+                np.nan,
+            )
+            logger.info(
+                f"Creating special case column for percentiles from {input_column_name}"
+            )
             df[
                 f"{output_column_name_root}"
                 f"{field_names.PERCENTILE_FIELD_SUFFIX}"
-            ] = (
-                df.where(
-                    df[field_names.AGRICULTURAL_VALUE_BOOL_FIELD].astype(float)
-                    == 1.0
-                )[input_column_name]
-                .rank(ascending=ascending, pct=True)
-                .fillna(
-                    df[field_names.AGRICULTURAL_VALUE_BOOL_FIELD].astype(float)
-                )
-            )
+            ] = tmp_series.rank(ascending=ascending, pct=True)
 
-            # Create the urban/rural percentiles.
-            urban_rural_percentile_fields_to_combine = []
-            for (urban_or_rural_string, urban_heuristic_bool) in [
-                ("urban", True),
-                ("rural", False),
-            ]:
-                # Create a field with only those values
-                this_category_only_value_field = (
-                    f"{input_column_name} (value {urban_or_rural_string} only)"
-                )
-                df[this_category_only_value_field] = np.where(
-                    df[field_names.URBAN_HEURISTIC_FIELD] == urban_heuristic_bool,
-                    df[input_column_name],
-                    None,
-                )
-
-                # Calculate the percentile for only this category
-                this_category_only_percentile_field = (
-                    f"{output_column_name_root} "
-                    f"(percentile {urban_or_rural_string} only)"
-                )
-                df[this_category_only_percentile_field] = df[
-                    this_category_only_value_field
-                ].rank(
-                    pct=True,
-                    # Set ascending to the parameter value.
-                    ascending=ascending,
-                )
-
-                # Add the field name to this list. Later, we'll combine this list.
-                urban_rural_percentile_fields_to_combine.append(
-                    this_category_only_percentile_field
-                )
-
-            # Combine both urban and rural into one field:
-            df[
-                f"{output_column_name_root}{field_names.PERCENTILE_URBAN_RURAL_FIELD_SUFFIX}"
-            ] = df[urban_rural_percentile_fields_to_combine].mean(
-                axis=1, skipna=True
-            )
+            # Check that "drop tracts" were dropped (quicker than creating a fixture?)
+            assert df[df[field_names.GEOID_TRACT_FIELD].isin(drop_tracts)][
+                f"{output_column_name_root}"
+                f"{field_names.PERCENTILE_FIELD_SUFFIX}"
+            ].isna().sum() == len(drop_tracts), "Not all tracts were dropped"
 
         return df
 
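The new drop_tracts path leans on the fact that Series.rank(pct=True) skips NaN by default: masking a tract's value to NaN removes it from both the numerator and the denominator of everyone's percentile. A small self-contained check with made-up tract IDs:

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(
        {
            "GEOID10_TRACT": ["t1", "t2", "t3", "t4"],
            "traffic burden": [1.0, 2.0, 3.0, 4.0],
        }
    )
    drop_tracts = ["t4"]

    # Mask the dropped tract to NaN, then rank: the NaN row gets no
    # percentile and does not count toward anyone else's denominator.
    tmp = df["traffic burden"].where(
        ~df["GEOID10_TRACT"].isin(drop_tracts), np.nan
    )
    df["traffic burden (percentile)"] = tmp.rank(pct=True, ascending=True)
    print(df["traffic burden (percentile)"].tolist())
    # [0.333..., 0.666..., 1.0, nan] -- t4 is excluded entirely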
@@ -363,19 +339,25 @@ class ScoreETL(ExtractTransformLoad):
 
         # Join all the data sources that use census tracts
         census_tract_dfs = [
-            self.census_df,
+            self.census_acs_df,
             self.hud_housing_df,
             self.cdc_places_df,
             self.cdc_life_expectancy_df,
             self.doe_energy_burden_df,
             self.ejscreen_df,
             self.geocorr_urban_rural_df,
-            self.persistent_poverty_df,
             self.national_risk_index_df,
             self.census_acs_median_incomes_df,
             self.census_decennial_df,
             self.census_2010_df,
-            self.child_opportunity_index_df,
+            self.hrs_df,
+            self.dot_travel_disadvantage_df,
+            self.fsf_flood_df,
+            self.fsf_fire_df,
+            self.nature_deprived_df,
+            self.eamlis_df,
+            self.fuds_df,
+            self.tribal_overlap_df,
         ]
 
         # Sanity check each data frame before merging.
@@ -384,8 +366,22 @@ class ScoreETL(ExtractTransformLoad):
 
         census_tract_df = self._join_tract_dfs(census_tract_dfs)
 
-        # If GEOID10s are read as numbers instead of strings, the initial 0 is dropped,
-        # and then we get too many CBG rows (one for 012345 and one for 12345).
+        # Drop tracts that don't exist in the 2010 tracts
+        pre_join_len = census_tract_df[field_names.GEOID_TRACT_FIELD].nunique()
+
+        census_tract_df = census_tract_df.merge(
+            self.national_tract_df,
+            on="GEOID10_TRACT",
+            how="inner",
+        )
+        assert (
+            census_tract_df.shape[0] <= pre_join_len
+        ), "Join against national tract list ADDED rows"
+        logger.info(
+            "Dropped %s tracts not in the 2010 tract data",
+            pre_join_len
+            - census_tract_df[field_names.GEOID_TRACT_FIELD].nunique(),
+        )
 
         # Now sanity-check the merged df.
         self._census_tract_df_sanity_check(
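The inner join above doubles as a filter, and the assert guards against the one failure mode an inner join can hide: a duplicated key fanning rows out instead of dropping them. An illustration with toy data:

    import pandas as pd

    tracts = pd.DataFrame({"GEOID10_TRACT": ["a", "b", "c"], "value": [1, 2, 3]})
    national = pd.DataFrame({"GEOID10_TRACT": ["a", "b"]})  # "c" retired after 2010

    pre_join_len = tracts["GEOID10_TRACT"].nunique()
    merged = tracts.merge(national, on="GEOID10_TRACT", how="inner")

    # An inner join against a deduplicated key list can only shrink the frame;
    # growth would mean the national list itself contained duplicate GEOIDs.
    assert merged.shape[0] <= pre_join_len
    print(pre_join_len - merged["GEOID10_TRACT"].nunique())  # 1 tract dropped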
@@ -405,8 +401,29 @@ class ScoreETL(ExtractTransformLoad):
             df[field_names.MEDIAN_INCOME_FIELD] / df[field_names.AMI_FIELD]
         )
 
+        self.ISLAND_DEMOGRAPHIC_BACKFILL_FIELDS = [
+            field_names.PERCENT_BLACK_FIELD_NAME
+            + field_names.ISLAND_AREA_BACKFILL_SUFFIX,
+            field_names.PERCENT_AMERICAN_INDIAN_FIELD_NAME
+            + field_names.ISLAND_AREA_BACKFILL_SUFFIX,
+            field_names.PERCENT_ASIAN_FIELD_NAME
+            + field_names.ISLAND_AREA_BACKFILL_SUFFIX,
+            field_names.PERCENT_HAWAIIAN_FIELD_NAME
+            + field_names.ISLAND_AREA_BACKFILL_SUFFIX,
+            field_names.PERCENT_TWO_OR_MORE_RACES_FIELD_NAME
+            + field_names.ISLAND_AREA_BACKFILL_SUFFIX,
+            field_names.PERCENT_NON_HISPANIC_WHITE_FIELD_NAME
+            + field_names.ISLAND_AREA_BACKFILL_SUFFIX,
+            field_names.PERCENT_HISPANIC_FIELD_NAME
+            + field_names.ISLAND_AREA_BACKFILL_SUFFIX,
+            field_names.PERCENT_OTHER_RACE_FIELD_NAME
+            + field_names.ISLAND_AREA_BACKFILL_SUFFIX,
+        ]
+
+        # Donut columns get added later
         numeric_columns = [
             field_names.HOUSING_BURDEN_FIELD,
+            field_names.NO_KITCHEN_OR_INDOOR_PLUMBING_FIELD,
             field_names.TOTAL_POP_FIELD,
             field_names.MEDIAN_INCOME_AS_PERCENT_OF_STATE_FIELD,
             field_names.ASTHMA_FIELD,
@@ -453,27 +470,55 @@ class ScoreETL(ExtractTransformLoad):
             field_names.CENSUS_UNEMPLOYMENT_FIELD_2010,
             field_names.CENSUS_POVERTY_LESS_THAN_100_FPL_FIELD_2010,
             field_names.CENSUS_DECENNIAL_TOTAL_POPULATION_FIELD_2009,
-            field_names.EXTREME_HEAT_FIELD,
-            field_names.HEALTHY_FOOD_FIELD,
-            field_names.IMPENETRABLE_SURFACES_FIELD,
-            # We have to pass this boolean here in order to include it in ag value loss percentiles.
-            field_names.AGRICULTURAL_VALUE_BOOL_FIELD,
-        ]
+            field_names.UST_FIELD,
+            field_names.DOT_TRAVEL_BURDEN_FIELD,
+            field_names.FUTURE_FLOOD_RISK_FIELD,
+            field_names.FUTURE_WILDFIRE_RISK_FIELD,
+            field_names.TRACT_PERCENT_NON_NATURAL_FIELD_NAME,
+            field_names.POVERTY_LESS_THAN_200_FPL_IMPUTED_FIELD,
+            field_names.PERCENT_BLACK_FIELD_NAME,
+            field_names.PERCENT_AMERICAN_INDIAN_FIELD_NAME,
+            field_names.PERCENT_ASIAN_FIELD_NAME,
+            field_names.PERCENT_HAWAIIAN_FIELD_NAME,
+            field_names.PERCENT_TWO_OR_MORE_RACES_FIELD_NAME,
+            field_names.PERCENT_NON_HISPANIC_WHITE_FIELD_NAME,
+            field_names.PERCENT_HISPANIC_FIELD_NAME,
+            field_names.PERCENT_OTHER_RACE_FIELD_NAME,
+            field_names.PERCENT_AGE_UNDER_10,
+            field_names.PERCENT_AGE_10_TO_64,
+            field_names.PERCENT_AGE_OVER_64,
+            field_names.PERCENT_OF_TRIBAL_AREA_IN_TRACT,
+            field_names.COUNT_OF_TRIBAL_AREAS_IN_TRACT_AK,
+            field_names.COUNT_OF_TRIBAL_AREAS_IN_TRACT_CONUS,
+            field_names.PERCENT_OF_TRIBAL_AREA_IN_TRACT_DISPLAY,
+        ] + self.ISLAND_DEMOGRAPHIC_BACKFILL_FIELDS
 
         non_numeric_columns = [
             self.GEOID_TRACT_FIELD_NAME,
-            field_names.PERSISTENT_POVERTY_FIELD,
+            field_names.TRACT_ELIGIBLE_FOR_NONNATURAL_THRESHOLD,
+            field_names.AGRICULTURAL_VALUE_BOOL_FIELD,
+            field_names.NAMES_OF_TRIBAL_AREAS_IN_TRACT,
+        ]
+
+        boolean_columns = [
+            field_names.AML_BOOLEAN,
+            field_names.IMPUTED_INCOME_FLAG_FIELD_NAME,
+            field_names.ELIGIBLE_FUDS_BINARY_FIELD_NAME,
+            field_names.HISTORIC_REDLINING_SCORE_EXCEEDED,
+            field_names.IS_TRIBAL_DAC,
         ]
 
         # For some columns, high values are "good", so we want to reverse the percentile
         # so that high values are "bad" and any scoring logic can still check if it's
         # >= some threshold.
+        # Note that we must use dataclass here instead of namedtuples on account of pylint
         # TODO: Add more fields here.
         # https://github.com/usds/justice40-tool/issues/970
-        ReversePercentile = namedtuple(
-            typename="ReversePercentile",
-            field_names=["field_name", "low_field_name"],
-        )
+        @dataclass
+        class ReversePercentile:
+            field_name: str
+            low_field_name: str
 
         reverse_percentiles = [
             # This dictionary follows the format:
             # <field name> : <field name for low values>
@@ -481,10 +526,6 @@ class ScoreETL(ExtractTransformLoad):
             # This low field will not exist yet, it is only calculated for the
             # percentile.
             # TODO: This will come from the YAML dataset config
-            ReversePercentile(
-                field_name=field_names.READING_FIELD,
-                low_field_name=field_names.LOW_READING_FIELD,
-            ),
             ReversePercentile(
                 field_name=field_names.MEDIAN_INCOME_AS_PERCENT_OF_AMI_FIELD,
                 low_field_name=field_names.LOW_MEDIAN_INCOME_AS_PERCENT_OF_AMI_FIELD,
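The namedtuple-to-dataclass swap above is motivated by the added comment ("on account of pylint"): a dataclass gives the linter typed fields to check while keeping identical attribute access. A tiny usage sketch with illustrative field names:

    from dataclasses import dataclass


    @dataclass
    class ReversePercentile:
        field_name: str
        low_field_name: str


    rp = ReversePercentile(
        field_name="Median household income (% of AMI)",
        low_field_name="Low median household income (% of AMI)",
    )
    # Attribute access is identical to the namedtuple it replaces, but the
    # fields are now typed, and defaults or validation can be added later.
    print(rp.low_field_name)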
@@ -503,40 +544,90 @@ class ScoreETL(ExtractTransformLoad):
             non_numeric_columns
             + numeric_columns
             + [rp.field_name for rp in reverse_percentiles]
+            + boolean_columns
         )
 
         df_copy = df[columns_to_keep].copy()
 
+        assert len(numeric_columns) == len(
+            set(numeric_columns)
+        ), "You have a double-entered column in the numeric columns list"
+
         df_copy[numeric_columns] = df_copy[numeric_columns].apply(pd.to_numeric)
 
+        # coerce all booleans to bools preserving nan character
+        # since this is a boolean, need to use `None`
+        for col in boolean_columns:
+            tmp = df_copy[col].copy()
+            df_copy[col] = np.where(tmp.notna(), tmp.astype(bool), None)
+            logger.info(f"{col} contains {df_copy[col].isna().sum()} nulls.")
+
         # Convert all columns to numeric and do math
+        # Note that we have a few special conditions here and we handle them explicitly.
+        # For *Linguistic Isolation*, we do NOT want to include Puerto Rico in the percentile
+        # calculation. This is because linguistic isolation as a category doesn't make much sense
+        # in Puerto Rico, where Spanish is a recognized language. Thus, we construct a list
+        # of tracts to drop from the percentile calculation.
+        #
+        # For *Expected Agricultural Loss*, we only want to include in the percentile tracts
+        # in which there is some agricultural value. This helps us adjust the data such that we have
+        # the ability to discern which tracts truly are at the 90th percentile, since many tracts have 0 value.
+        #
+        # For *Non-Natural Space*, we may only want to include tracts that have at least 35 acres, I think. This will
+        # get rid of tracts that we think are aberrations statistically. Right now, we have left this out
+        # pending ground-truthing.
+        #
+        # For *Traffic Barriers*, we want to exclude low population tracts, which may have high burden because they are
+        # low population alone. We set this low population constant in the if statement.
         for numeric_column in numeric_columns:
+            drop_tracts = []
+            if (
+                numeric_column
+                == field_names.EXPECTED_AGRICULTURE_LOSS_RATE_FIELD
+            ):
+                drop_tracts = df_copy[
+                    ~df_copy[field_names.AGRICULTURAL_VALUE_BOOL_FIELD]
+                    .astype(bool)
+                    .fillna(False)
+                ][field_names.GEOID_TRACT_FIELD].to_list()
+                logger.info(
+                    f"Dropping {len(drop_tracts)} tracts from Agricultural Value Loss"
+                )
+            elif numeric_column == field_names.LINGUISTIC_ISO_FIELD:
+                drop_tracts = df_copy[
+                    # 72 is the FIPS code for Puerto Rico
+                    df_copy[field_names.GEOID_TRACT_FIELD].str.startswith("72")
+                ][field_names.GEOID_TRACT_FIELD].to_list()
+                logger.info(
+                    f"Dropping {len(drop_tracts)} tracts from Linguistic Isolation"
+                )
+
+            elif numeric_column in [
+                field_names.DOT_TRAVEL_BURDEN_FIELD,
+                field_names.EXPECTED_POPULATION_LOSS_RATE_FIELD,
+            ]:
+                # Not having any people appears to be correlated with transit burden, but also doesn't represent
+                # on-the-ground need. For now, we remove these tracts from the percentile calculation.
+                # Similarly, we want to exclude low population tracts from FEMA's index.
+                low_population = 20
+                drop_tracts = df_copy[
+                    df_copy[field_names.TOTAL_POP_FIELD].fillna(0)
+                    <= low_population
+                ][field_names.GEOID_TRACT_FIELD].to_list()
+                logger.info(
+                    f"Dropping {len(drop_tracts)} tracts from DOT traffic burden"
+                )
+
             df_copy = self._add_percentiles_to_df(
                 df=df_copy,
                 input_column_name=numeric_column,
                 # For this use case, the input name and output name root are the same.
                 output_column_name_root=numeric_column,
                 ascending=True,
+                drop_tracts=drop_tracts,
             )
 
-            # Min-max normalization:
-            # (
-            #     Observed value
-            #     - minimum of all values
-            # )
-            # divided by
-            # (
-            #     Maximum of all values
-            #     - minimum of all values
-            # )
-            min_value = df_copy[numeric_column].min(skipna=True)
-
-            max_value = df_copy[numeric_column].max(skipna=True)
-
-            df_copy[f"{numeric_column}{field_names.MIN_MAX_FIELD_SUFFIX}"] = (
-                df_copy[numeric_column] - min_value
-            ) / (max_value - min_value)
-
         # Create reversed percentiles for these fields
         for reverse_percentile in reverse_percentiles:
             # Calculate reverse percentiles
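The boolean coercion above works because np.where with None among its outputs yields an object-dtype result, which keeps "no data" distinct from False. A quick check of that behavior:

    import numpy as np
    import pandas as pd

    flags = pd.Series([1.0, 0.0, np.nan], name="Is Tribal DAC")

    tmp = flags.copy()
    coerced = pd.Series(np.where(tmp.notna(), tmp.astype(bool), None))

    # Missing values stay None (not False), so downstream logic can tell
    # "no data" apart from "threshold not met".
    print(coerced.tolist())           # [True, False, None]
    print(int(coerced.isna().sum()))  # 1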
@@ -566,6 +657,32 @@ class ScoreETL(ExtractTransformLoad):
 
         return df_copy
 
+    @staticmethod
+    def _get_island_areas(df: pd.DataFrame) -> pd.Series:
+        return (
+            df[field_names.GEOID_TRACT_FIELD]
+            .str[:2]
+            .isin(constants.TILES_ISLAND_AREA_FIPS_CODES)
+        )
+
+    def _backfill_island_demographics(self, df: pd.DataFrame) -> pd.DataFrame:
+        logger.info("Backfilling island demographic data")
+        island_index = self._get_island_areas(df)
+        for backfill_field_name in self.ISLAND_DEMOGRAPHIC_BACKFILL_FIELDS:
+            actual_field_name = backfill_field_name.replace(
+                field_names.ISLAND_AREA_BACKFILL_SUFFIX, ""
+            )
+            df.loc[island_index, actual_field_name] = df.loc[
+                island_index, backfill_field_name
+            ]
+        df = df.drop(columns=self.ISLAND_DEMOGRAPHIC_BACKFILL_FIELDS)
+
+        df.loc[island_index, field_names.TOTAL_POP_FIELD] = df.loc[
+            island_index, field_names.COMBINED_CENSUS_TOTAL_POPULATION_2010
+        ]
+
+        return df
+
     def transform(self) -> None:
         logger.info("Transforming Score Data")
 
@@ -575,8 +692,13 @@ class ScoreETL(ExtractTransformLoad):
         # calculate scores
         self.df = ScoreRunner(df=self.df).calculate_scores()
 
+        # We add island demographic data since it doesn't matter to the score anyway
+        self.df = self._backfill_island_demographics(self.df)
+
     def load(self) -> None:
-        logger.info("Saving Score CSV")
+        logger.info(
+            f"Saving Score CSV to {constants.DATA_SCORE_CSV_FULL_FILE_PATH}."
+        )
         constants.DATA_SCORE_CSV_FULL_DIR.mkdir(parents=True, exist_ok=True)
 
         self.df.to_csv(constants.DATA_SCORE_CSV_FULL_FILE_PATH, index=False)
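The backfill methods added above copy island-area values from suffixed helper columns into the primary demographic columns, then drop the helpers. A compact sketch; the suffix string here is a stand-in for field_names.ISLAND_AREA_BACKFILL_SUFFIX, whose actual value is not shown in this patch:

    import pandas as pd

    BACKFILL_SUFFIX = " (island areas)"  # hypothetical suffix value

    df = pd.DataFrame(
        {
            "GEOID10_TRACT": ["60010950100", "01001020100"],
            "Percent Black": [None, 0.2],
            "Percent Black (island areas)": [0.05, None],
        }
    )

    island_index = df["GEOID10_TRACT"].str[:2].isin(["60", "66", "69", "78"])
    backfill = "Percent Black" + BACKFILL_SUFFIX
    # Copy the decennial-sourced value into the main column for island rows
    # only, then drop the helper column.
    df.loc[island_index, "Percent Black"] = df.loc[island_index, backfill]
    df = df.drop(columns=[backfill])
    print(df["Percent Black"].tolist())  # [0.05, 0.2]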
@@ -1,24 +1,20 @@
 import concurrent.futures
 import math
 import os
 
+import geopandas as gpd
 import numpy as np
 import pandas as pd
-import geopandas as gpd
+from data_pipeline.content.schemas.download_schemas import CSVConfig
 
 from data_pipeline.etl.base import ExtractTransformLoad
 from data_pipeline.etl.score import constants
-from data_pipeline.etl.sources.census.etl_utils import (
-    check_census_data_source,
-)
 from data_pipeline.etl.score.etl_utils import check_score_data_source
+from data_pipeline.etl.sources.census.etl_utils import check_census_data_source
 from data_pipeline.score import field_names
-from data_pipeline.content.schemas.download_schemas import CSVConfig
-from data_pipeline.utils import (
-    get_module_logger,
-    zip_files,
-    load_yaml_dict_from_file,
-    load_dict_from_yaml_object_fields,
-)
+from data_pipeline.utils import get_module_logger
+from data_pipeline.utils import load_dict_from_yaml_object_fields
+from data_pipeline.utils import load_yaml_dict_from_file
+from data_pipeline.utils import zip_files
 
 logger = get_module_logger(__name__)
 
@@ -41,23 +37,25 @@ class GeoScoreETL(ExtractTransformLoad):
         self.SCORE_CSV_PATH = self.DATA_PATH / "score" / "csv"
         self.TILE_SCORE_CSV = self.SCORE_CSV_PATH / "tiles" / "usa.csv"
 
-        self.DATA_SOURCE = data_source
         self.CENSUS_USA_GEOJSON = (
             self.DATA_PATH / "census" / "geojson" / "us.json"
         )
 
-        # Import the shortened name for Score M percentile ("SM_PFS") that's used on the
-        # tiles.
+        # Import the shortened name for Score N to be used on tiles.
+        # We should no longer be using PFS
+
+        ## TODO: We really should not have this any longer changing
         self.TARGET_SCORE_SHORT_FIELD = constants.TILES_SCORE_COLUMNS[
-            field_names.SCORE_M + field_names.PERCENTILE_FIELD_SUFFIX
+            field_names.FINAL_SCORE_N_BOOLEAN
         ]
-        self.TARGET_SCORE_RENAME_TO = "M_SCORE"
+        self.TARGET_SCORE_RENAME_TO = "SCORE"
 
         # Import the shortened name for tract ("GTF") that's used on the tiles.
         self.TRACT_SHORT_FIELD = constants.TILES_SCORE_COLUMNS[
             field_names.GEOID_TRACT_FIELD
         ]
         self.GEOMETRY_FIELD_NAME = "geometry"
+        self.LAND_FIELD_NAME = "ALAND10"
 
         # We will adjust this upwards while there is some fractional value
         # in the score. This is a starting value.
@@ -84,17 +82,28 @@ class GeoScoreETL(ExtractTransformLoad):
         )
 
         logger.info("Reading US GeoJSON (~6 minutes)")
-        self.geojson_usa_df = gpd.read_file(
+        full_geojson_usa_df = gpd.read_file(
             self.CENSUS_USA_GEOJSON,
             dtype={self.GEOID_FIELD_NAME: "string"},
-            usecols=[self.GEOID_FIELD_NAME, self.GEOMETRY_FIELD_NAME],
+            usecols=[
+                self.GEOID_FIELD_NAME,
+                self.GEOMETRY_FIELD_NAME,
+                self.LAND_FIELD_NAME,
+            ],
             low_memory=False,
         )
 
+        # We only want to keep tracts to visualize that have non-0 land
+        self.geojson_usa_df = full_geojson_usa_df[
+            full_geojson_usa_df[self.LAND_FIELD_NAME] > 0
+        ]
+
         logger.info("Reading score CSV")
         self.score_usa_df = pd.read_csv(
             self.TILE_SCORE_CSV,
-            dtype={self.TRACT_SHORT_FIELD: "string"},
+            dtype={
+                self.TRACT_SHORT_FIELD: str,
+            },
             low_memory=False,
        )
 
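ALAND10 is the Census land-area attribute (square meters) carried in the tract GeoJSON; filtering on it keeps water-only tracts out of the tiles. The effect can be shown without any geometry at all:

    import pandas as pd

    tracts = pd.DataFrame(
        {"GEOID10": ["01001020100", "36061000100"], "ALAND10": [123456, 0]}
    )
    # Water-only tracts have zero land area and would render as empty shapes,
    # so they are excluded from the visualization layer.
    visible = tracts[tracts["ALAND10"] > 0]
    print(visible["GEOID10"].tolist())  # ['01001020100']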
@@ -134,7 +143,7 @@ class GeoScoreETL(ExtractTransformLoad):
             columns={self.TARGET_SCORE_SHORT_FIELD: self.TARGET_SCORE_RENAME_TO}
         )
 
-        logger.info("Converting to geojson into tracts")
+        logger.info("Converting geojson into geodf with tracts")
         usa_tracts = gpd.GeoDataFrame(
             usa_tracts,
             columns=[
@@ -270,8 +279,10 @@ class GeoScoreETL(ExtractTransformLoad):
         # Create separate threads to run each write to disk.
         def write_high_to_file():
             logger.info("Writing usa-high (~9 minutes)")
+
             self.geojson_score_usa_high.to_file(
-                filename=self.SCORE_HIGH_GEOJSON, driver="GeoJSON"
+                filename=self.SCORE_HIGH_GEOJSON,
+                driver="GeoJSON",
             )
             logger.info("Completed writing usa-high")
 
@@ -294,7 +305,6 @@ class GeoScoreETL(ExtractTransformLoad):
             pd.Series(codebook)
             .reset_index()
             .rename(
-                # kept as strings because no downstream impacts
                 columns={
                     0: internal_column_name_field,
                     "index": shapefile_column_field,
@@ -1,35 +1,29 @@
-from pathlib import Path
 import json
-from numpy import float64
+from pathlib import Path
 
 import numpy as np
 import pandas as pd
-from data_pipeline.content.schemas.download_schemas import (
-    CSVConfig,
-    CodebookConfig,
-    ExcelConfig,
-)
+from data_pipeline.content.schemas.download_schemas import CodebookConfig
+from data_pipeline.content.schemas.download_schemas import CSVConfig
+from data_pipeline.content.schemas.download_schemas import ExcelConfig
 
 from data_pipeline.etl.base import ExtractTransformLoad
-from data_pipeline.etl.score.etl_utils import floor_series, create_codebook
-from data_pipeline.utils import (
-    get_module_logger,
-    zip_files,
-    load_yaml_dict_from_file,
-    column_list_from_yaml_object_fields,
-    load_dict_from_yaml_object_fields,
-)
+from data_pipeline.etl.score.etl_utils import create_codebook
+from data_pipeline.etl.score.etl_utils import floor_series
+from data_pipeline.etl.sources.census.etl_utils import check_census_data_source
 from data_pipeline.score import field_names
+from data_pipeline.utils import column_list_from_yaml_object_fields
+from data_pipeline.utils import get_module_logger
+from data_pipeline.utils import load_dict_from_yaml_object_fields
+from data_pipeline.utils import load_yaml_dict_from_file
+from data_pipeline.utils import zip_files
+from numpy import float64
 
-
-from data_pipeline.etl.sources.census.etl_utils import (
-    check_census_data_source,
-)
 from . import constants
 
 logger = get_module_logger(__name__)
 
 # Define the DAC variable
-DISADVANTAGED_COMMUNITIES_FIELD = field_names.SCORE_M_COMMUNITIES
+DISADVANTAGED_COMMUNITIES_FIELD = field_names.SCORE_N_COMMUNITIES
 
 
 class PostScoreETL(ExtractTransformLoad):
@@ -45,7 +39,6 @@ class PostScoreETL(ExtractTransformLoad):
         self.input_counties_df: pd.DataFrame
         self.input_states_df: pd.DataFrame
         self.input_score_df: pd.DataFrame
-        self.input_national_tract_df: pd.DataFrame
 
         self.output_score_county_state_merged_df: pd.DataFrame
         self.output_score_tiles_df: pd.DataFrame
@@ -92,7 +85,9 @@ class PostScoreETL(ExtractTransformLoad):
     def _extract_score(self, score_path: Path) -> pd.DataFrame:
         logger.info("Reading Score CSV")
         df = pd.read_csv(
-            score_path, dtype={self.GEOID_TRACT_FIELD_NAME: "string"}
+            score_path,
+            dtype={self.GEOID_TRACT_FIELD_NAME: "string"},
+            low_memory=False,
         )
 
         # Convert total population to an int
@@ -102,18 +97,6 @@ class PostScoreETL(ExtractTransformLoad):
 
         return df
 
-    def _extract_national_tract(
-        self, national_tract_path: Path
-    ) -> pd.DataFrame:
-        logger.info("Reading national tract file")
-        return pd.read_csv(
-            national_tract_path,
-            names=[self.GEOID_TRACT_FIELD_NAME],
-            dtype={self.GEOID_TRACT_FIELD_NAME: "string"},
-            low_memory=False,
-            header=None,
-        )
-
     def extract(self) -> None:
         logger.info("Starting Extraction")
 
@@ -136,9 +119,6 @@ class PostScoreETL(ExtractTransformLoad):
         self.input_score_df = self._extract_score(
             constants.DATA_SCORE_CSV_FULL_FILE_PATH
         )
-        self.input_national_tract_df = self._extract_national_tract(
-            constants.DATA_CENSUS_CSV_FILE_PATH
-        )
 
     def _transform_counties(
         self, initial_counties_df: pd.DataFrame
@@ -185,7 +165,6 @@ class PostScoreETL(ExtractTransformLoad):
 
     def _create_score_data(
         self,
-        national_tract_df: pd.DataFrame,
         counties_df: pd.DataFrame,
         states_df: pd.DataFrame,
         score_df: pd.DataFrame,
@@ -217,28 +196,11 @@ class PostScoreETL(ExtractTransformLoad):
             right_on=self.STATE_CODE_COLUMN,
             how="left",
         )
-
-        # check if there are census tracts without score
-        logger.info("Removing tract rows without score")
-
-        # merge census tracts with score
-        merged_df = national_tract_df.merge(
-            score_county_state_merged,
-            on=self.GEOID_TRACT_FIELD_NAME,
-            how="left",
-        )
-
-        # recast population to integer
-        score_county_state_merged["Total population"] = (
-            merged_df["Total population"].fillna(0).astype(int)
-        )
-
-        de_duplicated_df = merged_df.dropna(
-            subset=[DISADVANTAGED_COMMUNITIES_FIELD]
-        )
+        assert score_county_merged[
+            self.GEOID_TRACT_FIELD_NAME
+        ].is_unique, "Merging state/county data introduced duplicate rows"
 
         # set the score to the new df
-        return de_duplicated_df
+        return score_county_state_merged
 
     def _create_tile_data(
         self,
|
@ -254,8 +216,8 @@ class PostScoreETL(ExtractTransformLoad):
|
||||||
tiles_score_column_titles
|
tiles_score_column_titles
|
||||||
].copy()
|
].copy()
|
||||||
|
|
||||||
# Currently, we do not want USVI or Guam on the map, so this will drop all
|
# We may not want some states/territories on the map, so this will drop all
|
||||||
# rows with the FIPS codes (first two digits of the census tract)
|
# rows with those FIPS codes (first two digits of the census tract)
|
||||||
logger.info(
|
logger.info(
|
||||||
f"Dropping specified FIPS codes from tile data: {constants.DROP_FIPS_CODES}"
|
f"Dropping specified FIPS codes from tile data: {constants.DROP_FIPS_CODES}"
|
||||||
)
|
)
|
||||||
|
@ -269,16 +231,15 @@ class PostScoreETL(ExtractTransformLoad):
|
||||||
score_tiles = score_tiles[
|
score_tiles = score_tiles[
|
||||||
~score_tiles[field_names.GEOID_TRACT_FIELD].isin(tracts_to_drop)
|
~score_tiles[field_names.GEOID_TRACT_FIELD].isin(tracts_to_drop)
|
||||||
]
|
]
|
||||||
|
float_cols = [
|
||||||
score_tiles[constants.TILES_SCORE_FLOAT_COLUMNS] = score_tiles[
|
col
|
||||||
constants.TILES_SCORE_FLOAT_COLUMNS
|
for col, col_dtype in score_tiles.dtypes.items()
|
||||||
].apply(
|
if col_dtype == np.dtype("float64")
|
||||||
func=lambda series: floor_series(
|
]
|
||||||
series=series,
|
scale_factor = 10**constants.TILES_ROUND_NUM_DECIMALS
|
||||||
number_of_decimals=constants.TILES_ROUND_NUM_DECIMALS,
|
score_tiles[float_cols] = (
|
||||||
),
|
score_tiles[float_cols] * scale_factor
|
||||||
axis=0,
|
).apply(np.floor) / scale_factor
|
||||||
)
|
|
||||||
|
|
||||||
logger.info("Adding fields for island areas and Puerto Rico")
|
logger.info("Adding fields for island areas and Puerto Rico")
|
||||||
# The below operation constructs variables for the front end.
|
# The below operation constructs variables for the front end.
|
||||||
|
@ -427,7 +388,6 @@ class PostScoreETL(ExtractTransformLoad):
|
||||||
transformed_score = self._transform_score(self.input_score_df)
|
transformed_score = self._transform_score(self.input_score_df)
|
||||||
|
|
||||||
output_score_county_state_merged_df = self._create_score_data(
|
output_score_county_state_merged_df = self._create_score_data(
|
||||||
self.input_national_tract_df,
|
|
||||||
transformed_counties,
|
transformed_counties,
|
||||||
transformed_states,
|
transformed_states,
|
||||||
transformed_score,
|
transformed_score,
|
||||||
|
@ -521,8 +481,6 @@ class PostScoreETL(ExtractTransformLoad):
|
||||||
score_tiles_df.to_csv(tile_score_path, index=False, encoding="utf-8")
|
score_tiles_df.to_csv(tile_score_path, index=False, encoding="utf-8")
|
||||||
|
|
||||||
def _load_downloadable_zip(self, downloadable_info_path: Path) -> None:
|
def _load_downloadable_zip(self, downloadable_info_path: Path) -> None:
|
||||||
logger.info("Saving Downloadable CSV")
|
|
||||||
|
|
||||||
downloadable_info_path.mkdir(parents=True, exist_ok=True)
|
downloadable_info_path.mkdir(parents=True, exist_ok=True)
|
||||||
csv_path = constants.SCORE_DOWNLOADABLE_CSV_FILE_PATH
|
csv_path = constants.SCORE_DOWNLOADABLE_CSV_FILE_PATH
|
||||||
excel_path = constants.SCORE_DOWNLOADABLE_EXCEL_FILE_PATH
|
excel_path = constants.SCORE_DOWNLOADABLE_EXCEL_FILE_PATH
|
||||||
|
@ -583,6 +541,22 @@ class PostScoreETL(ExtractTransformLoad):
|
||||||
"fields"
|
"fields"
|
||||||
],
|
],
|
||||||
)
|
)
|
||||||
|
assert codebook_df["csv_label"].equals(codebook_df["excel_label"]), (
|
||||||
|
"CSV and Excel differ. If that's intentional, "
|
||||||
|
"remove this assertion. Otherwise, fix it."
|
||||||
|
)
|
||||||
|
# Check the codebook to make sure it matches the download files
|
||||||
|
assert not set(codebook_df["csv_label"].dropna()).difference(
|
||||||
|
downloadable_df.columns
|
||||||
|
), "Codebook is missing columns from downloadable files"
|
||||||
|
assert (
|
||||||
|
len(
|
||||||
|
downloadable_df.columns.difference(
|
||||||
|
set(codebook_df["csv_label"])
|
||||||
|
)
|
||||||
|
)
|
||||||
|
== 0
|
||||||
|
), "Codebook has columns the downloadable files do not"
|
||||||
|
|
||||||
# load codebook to disk
|
# load codebook to disk
|
||||||
codebook_df.to_csv(codebook_path, index=False)
|
codebook_df.to_csv(codebook_path, index=False)
|
||||||
|
|
|
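For reference, a minimal sketch of the merge-then-assert pattern the reworked _create_score_data relies on; the frames and column values below are illustrative, not pipeline data:

    import pandas as pd

    score_df = pd.DataFrame(
        {"GEOID10_TRACT": ["01001020100", "01001020200"], "Score N": [True, False]}
    )
    counties_df = pd.DataFrame(
        {"GEOID10_TRACT": ["01001020100", "01001020200"], "County Name": ["Autauga"] * 2}
    )

    merged = score_df.merge(counties_df, on="GEOID10_TRACT", how="left")

    # A left merge silently duplicates rows when the right side repeats a key;
    # asserting uniqueness afterward turns that into a loud failure instead.
    assert merged["GEOID10_TRACT"].is_unique, "Merge introduced duplicate rows"

The assertion replaces the old approach of merging the national tract list and dropping rows without a score, which could mask duplicate-key bugs.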
@@ -1,16 +1,21 @@
 import os
 import sys
-from pathlib import Path
+import typing
 from collections import namedtuple
+from pathlib import Path
 
 import numpy as np
 import pandas as pd
 
 from data_pipeline.config import settings
-from data_pipeline.utils import (
-    download_file_from_url,
-    get_module_logger,
-)
+from data_pipeline.etl.score.constants import TILES_ALASKA_AND_HAWAII_FIPS_CODE
+from data_pipeline.etl.score.constants import TILES_CONTINENTAL_US_FIPS_CODE
+from data_pipeline.etl.score.constants import TILES_ISLAND_AREA_FIPS_CODES
+from data_pipeline.etl.score.constants import TILES_PUERTO_RICO_FIPS_CODE
+from data_pipeline.etl.sources.census.etl_utils import get_state_fips_codes
 from data_pipeline.score import field_names
+from data_pipeline.utils import download_file_from_url
+from data_pipeline.utils import get_module_logger
 
 from . import constants
 
 logger = get_module_logger(__name__)

@@ -91,7 +96,7 @@ def floor_series(series: pd.Series, number_of_decimals: int) -> pd.Series:
     if series.isin(unacceptable_values).any():
         series.replace(mapping, regex=False, inplace=True)
 
-    multiplication_factor = 10 ** number_of_decimals
+    multiplication_factor = 10**number_of_decimals
 
     # In order to safely cast NaNs
     # First coerce series to float type: series.astype(float)

@@ -305,3 +310,106 @@ def create_codebook(
     return merged_codebook_df[constants.CODEBOOK_COLUMNS].rename(
         columns={constants.CEJST_SCORE_COLUMN_NAME: "Description"}
     )
+
+
+# pylint: disable=too-many-arguments
+def compare_to_list_of_expected_state_fips_codes(
+    actual_state_fips_codes: typing.List[str],
+    continental_us_expected: bool = True,
+    alaska_and_hawaii_expected: bool = True,
+    puerto_rico_expected: bool = True,
+    island_areas_expected: bool = True,
+    additional_fips_codes_not_expected: typing.List[str] = None,
+    dataset_name: str = None,
+) -> None:
+    """Check whether a list of state/territory FIPS codes match expectations.
+
+    Args:
+        actual_state_fips_codes (List of str): Actual state codes observed in data
+        continental_us_expected (bool, optional): Do you expect the continental nation
+            (DC & states except for Alaska and Hawaii) to be represented in data?
+        alaska_and_hawaii_expected (bool, optional): Do you expect Alaska and Hawaii
+            to be represented in the data? Note: if only *1* of Alaska and Hawaii is
+            not expected to be included, do not use this argument -- instead,
+            use `additional_fips_codes_not_expected` for the 1 state you expect to
+            be missing.
+        puerto_rico_expected (bool, optional): Do you expect PR to be represented in data?
+        island_areas_expected (bool, optional): Do you expect Island Areas to be represented in
+            data?
+        additional_fips_codes_not_expected (List of str, optional): Additional state codes
+            not expected in the data. For example, the data may be known to be missing
+            data from Maine and Wisconsin.
+        dataset_name (str, optional): The name of the data set, used only in printing an
+            error message. (This is helpful for debugging during parallel ETL runs.)
+
+    Returns:
+        None: Does not return any values.
+
+    Raises:
+        ValueError: if lists do not match expectations.
+    """
+    # Setting default argument of [] here to avoid mutability problems.
+    if additional_fips_codes_not_expected is None:
+        additional_fips_codes_not_expected = []
+
+    # Cast input to a set.
+    actual_state_fips_codes_set = set(actual_state_fips_codes)
+
+    # Start with the list of all FIPS codes for all states and territories.
+    expected_states_set = set(get_state_fips_codes(settings.DATA_PATH))
+
+    # If continental US is not expected to be included, remove it from the
+    # expected states set.
+    if not continental_us_expected:
+        expected_states_set = expected_states_set.difference(
+            TILES_CONTINENTAL_US_FIPS_CODE
+        )
+
+    # If both Alaska and Hawaii are not expected to be included, remove them from the
+    # expected states set.
+    # Note: if only *1* of Alaska and Hawaii is not expected to be included,
+    # do not use this argument -- instead, use `additional_fips_codes_not_expected`
+    # for the 1 state you expect to be missing.
+    if not alaska_and_hawaii_expected:
+        expected_states_set = expected_states_set.difference(
+            TILES_ALASKA_AND_HAWAII_FIPS_CODE
+        )
+
+    # If Puerto Rico is not expected to be included, remove it from the expected
+    # states set.
+    if not puerto_rico_expected:
+        expected_states_set = expected_states_set.difference(
+            TILES_PUERTO_RICO_FIPS_CODE
+        )
+
+    # If island areas are not expected to be included, remove them from the expected
+    # states set.
+    if not island_areas_expected:
+        expected_states_set = expected_states_set.difference(
+            TILES_ISLAND_AREA_FIPS_CODES
+        )
+
+    # If additional FIPS codes are not expected to be included, remove them from the
+    # expected states set.
+    expected_states_set = expected_states_set.difference(
+        additional_fips_codes_not_expected
+    )
+
+    dataset_name_phrase = (
+        f" for dataset `{dataset_name}`" if dataset_name is not None else ""
+    )
+
+    if expected_states_set != actual_state_fips_codes_set:
+        raise ValueError(
+            f"The states and territories in the data{dataset_name_phrase} are not "
+            f"as expected.\n"
+            "FIPS state codes expected that are not present in the data:\n"
+            f"{sorted(list(expected_states_set - actual_state_fips_codes_set))}\n"
+            "FIPS state codes in the data that were not expected:\n"
+            f"{sorted(list(actual_state_fips_codes_set - expected_states_set))}\n"
+        )
+    else:
+        logger.info(
+            "Data matches expected state and territory representation"
+            f"{dataset_name_phrase}."
+        )
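For reference, a minimal sketch of the scale-and-floor rounding that both floor_series and the tile export above rely on; the series values are illustrative:

    import numpy as np
    import pandas as pd

    series = pd.Series([0.12345, 0.6789, np.nan])
    number_of_decimals = 2

    # Scale up, floor, scale back down: 0.6789 -> 67.89 -> 67.0 -> 0.67.
    scale_factor = 10**number_of_decimals
    floored = (series * scale_factor).apply(np.floor) / scale_factor
    print(floored.tolist())  # [0.12, 0.67, nan]

One rationale for flooring rather than rounding is that a displayed value never overstates the underlying one; NaNs pass through np.floor unchanged.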
@@ -1,6 +1,8 @@
-from dataclasses import dataclass, field
+from dataclasses import dataclass
+from dataclasses import field
 from enum import Enum
-from typing import List, Optional
+from typing import List
+from typing import Optional
 
 
 class FieldType(Enum):

@@ -77,7 +79,7 @@ class DatasetsConfig:
         long_name: str
         short_name: str
         module_name: str
-        input_geoid_tract_field_name: str
         load_fields: List[LoadField]
+        input_geoid_tract_field_name: Optional[str] = None
 
     datasets: List[Dataset]
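For reference, a minimal sketch of why input_geoid_tract_field_name moved below load_fields in the diff above: dataclass fields with defaults must follow fields without them, so making the field optional also forces the reordering. The field types are simplified here:

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Dataset:
        long_name: str
        short_name: str
        module_name: str
        load_fields: List[str]  # required fields come first
        input_geoid_tract_field_name: Optional[str] = None  # defaulted fields last

    d = Dataset("Example", "ex", "example_module", load_fields=["COL_A"])
    print(d.input_geoid_tract_field_name)  # None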
@@ -5,7 +5,8 @@ from pathlib import Path
 import pandas as pd
 import pytest
 from data_pipeline import config
-from data_pipeline.etl.score import etl_score_post, tests
+from data_pipeline.etl.score import etl_score_post
+from data_pipeline.etl.score import tests
 from data_pipeline.etl.score.etl_score_post import PostScoreETL
 
 
File diff suppressed because one or more lines are too long
@@ -1,4 +1,4 @@
 fips,state_name,state_abbreviation,region,division
 01,Alabama,AL,South,East South Central
 02,Alaska,AK,West,Pacific
 04,Arizona,AZ,West,Mountain
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
@@ -1,7 +1,9 @@
-import pandas as pd
 import numpy as np
+import pandas as pd
 import pytest
+from data_pipeline.etl.score.etl_utils import (
+    compare_to_list_of_expected_state_fips_codes,
+)
 from data_pipeline.etl.score.etl_utils import floor_series
 
 

@@ -70,3 +72,181 @@ def test_floor_series():
         match="Argument series must be of type pandas series, not of type list.",
     ):
         floor_series(invalid_type, number_of_decimals=3)
+
+
+def test_compare_to_list_of_expected_state_fips_codes():
+    # Has every state/territory/DC code
+    fips_codes_test_1 = [
+        "01",
+        "02",
+        "04",
+        "05",
+        "06",
+        "08",
+        "09",
+        "10",
+        "11",
+        "12",
+        "13",
+        "15",
+        "16",
+        "17",
+        "18",
+        "19",
+        "20",
+        "21",
+        "22",
+        "23",
+        "24",
+        "25",
+        "26",
+        "27",
+        "28",
+        "29",
+        "30",
+        "31",
+        "32",
+        "33",
+        "34",
+        "35",
+        "36",
+        "37",
+        "38",
+        "39",
+        "40",
+        "41",
+        "42",
+        "44",
+        "45",
+        "46",
+        "47",
+        "48",
+        "49",
+        "50",
+        "51",
+        "53",
+        "54",
+        "55",
+        "56",
+        "60",
+        "66",
+        "69",
+        "72",
+        "78",
+    ]
+
+    # Should not raise any errors
+    compare_to_list_of_expected_state_fips_codes(
+        actual_state_fips_codes=fips_codes_test_1
+    )
+
+    # Should raise error because Puerto Rico is not expected
+    with pytest.raises(ValueError) as exception_info:
+        compare_to_list_of_expected_state_fips_codes(
+            actual_state_fips_codes=fips_codes_test_1,
+            puerto_rico_expected=False,
+        )
+    partial_expected_error_message = (
+        "FIPS state codes in the data that were not expected:\n['72']\n"
+    )
+    assert partial_expected_error_message in str(exception_info.value)
+
+    # Should raise error because Island Areas are not expected
+    with pytest.raises(ValueError) as exception_info:
+        compare_to_list_of_expected_state_fips_codes(
+            actual_state_fips_codes=fips_codes_test_1,
+            island_areas_expected=False,
+        )
+    partial_expected_error_message = (
+        "FIPS state codes in the data that were not expected:\n"
+        "['60', '66', '69', '78']\n"
+    )
+    assert partial_expected_error_message in str(exception_info.value)
+
+    # List missing PR and Guam
+    fips_codes_test_2 = [x for x in fips_codes_test_1 if x not in ["66", "72"]]
+
+    # Should raise error because all Island Areas and PR are expected
+    with pytest.raises(ValueError) as exception_info:
+        compare_to_list_of_expected_state_fips_codes(
+            actual_state_fips_codes=fips_codes_test_2,
+        )
+    partial_expected_error_message = (
+        "FIPS state codes expected that are not present in the data:\n"
+        "['66', '72']\n"
+    )
+    assert partial_expected_error_message in str(exception_info.value)
+
+    # Missing Maine and Wisconsin
+    fips_codes_test_3 = [x for x in fips_codes_test_1 if x not in ["23", "55"]]
+
+    # Should raise error because Maine and Wisconsin are expected
+    with pytest.raises(ValueError) as exception_info:
+        compare_to_list_of_expected_state_fips_codes(
+            actual_state_fips_codes=fips_codes_test_3,
+        )
+    partial_expected_error_message = (
+        "FIPS state codes expected that are not present in the data:\n"
+        "['23', '55']\n"
+    )
+    assert partial_expected_error_message in str(exception_info.value)
+
+    # Should not raise error because Maine and Wisconsin are expected to be missing
+    compare_to_list_of_expected_state_fips_codes(
+        actual_state_fips_codes=fips_codes_test_3,
+        additional_fips_codes_not_expected=["23", "55"],
+    )
+
+    # Missing the continental US and AK/HI
+    fips_codes_test_4 = [
+        "60",
+        "66",
+        "69",
+        "72",
+        "78",
+    ]
+
+    # Should raise error because the nation is expected
+    with pytest.raises(ValueError) as exception_info:
+        compare_to_list_of_expected_state_fips_codes(
+            actual_state_fips_codes=fips_codes_test_4,
+        )
+
+    partial_expected_error_message = (
+        "FIPS state codes expected that are not present in the data:\n"
+        "['01', '02', '04', '05', '06', '08', '09', '10', '11', '12', '13', '15', '16', "
+        "'17', '18', '19', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', "
+        "'30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '42', "
+        "'44', '45', '46', '47', '48', '49', '50', '51', '53', '54', '55', '56']"
+    )
+
+    assert partial_expected_error_message in str(exception_info.value)
+
+    # Should not raise error because the continental US and AK/HI are not expected
+    compare_to_list_of_expected_state_fips_codes(
+        actual_state_fips_codes=fips_codes_test_4,
+        continental_us_expected=False,
+        alaska_and_hawaii_expected=False,
+    )
+
+    # Missing Hawaii but not Alaska
+    fips_codes_test_5 = [x for x in fips_codes_test_1 if x not in ["15"]]
+
+    # Should raise error because both Hawaii and Alaska are expected
+    with pytest.raises(ValueError) as exception_info:
+        compare_to_list_of_expected_state_fips_codes(
+            actual_state_fips_codes=fips_codes_test_5,
+            alaska_and_hawaii_expected=True,
+        )
+    partial_expected_error_message = (
+        "FIPS state codes expected that are not present in the data:\n"
+        "['15']\n"
+    )
+    assert partial_expected_error_message in str(exception_info.value)
+
+    # Should work as expected, with Hawaii explicitly flagged as missing
+    compare_to_list_of_expected_state_fips_codes(
+        actual_state_fips_codes=fips_codes_test_5,
+        alaska_and_hawaii_expected=True,
+        additional_fips_codes_not_expected=["15"],
+    )
@@ -1,14 +1,11 @@
 # pylint: disable=W0212
 ## Above disables warning about access to underscore-prefixed methods
 
 from importlib import reload
 from pathlib import Path
 
 import pandas.api.types as ptypes
 import pandas.testing as pdt
-from data_pipeline.content.schemas.download_schemas import (
-    CSVConfig,
-)
+from data_pipeline.content.schemas.download_schemas import CSVConfig
 from data_pipeline.etl.score import constants
 from data_pipeline.utils import load_yaml_dict_from_file

@@ -67,14 +64,12 @@ def test_transform_score(etl, score_data_initial, score_transformed_expected):
 # pylint: disable=too-many-arguments
 def test_create_score_data(
     etl,
-    national_tract_df,
     counties_transformed_expected,
     states_transformed_expected,
     score_transformed_expected,
     score_data_expected,
 ):
     score_data_actual = etl._create_score_data(
-        national_tract_df,
         counties_transformed_expected,
         states_transformed_expected,
         score_transformed_expected,
@@ -1,8 +1,7 @@
 import pandas as pd
+from data_pipeline.config import settings
 from data_pipeline.etl.base import ExtractTransformLoad
 from data_pipeline.utils import get_module_logger
-from data_pipeline.config import settings
 
 
 logger = get_module_logger(__name__)
 
@@ -1,58 +1,148 @@
+import pathlib
 from pathlib import Path
-import pandas as pd
 
+import pandas as pd
 from data_pipeline.etl.base import ExtractTransformLoad
-from data_pipeline.utils import get_module_logger, download_file_from_url
+from data_pipeline.etl.base import ValidGeoLevel
+from data_pipeline.etl.score.etl_utils import (
+    compare_to_list_of_expected_state_fips_codes,
+)
+from data_pipeline.score import field_names
+from data_pipeline.utils import download_file_from_url
+from data_pipeline.utils import get_module_logger
+from data_pipeline.config import settings
 
 logger = get_module_logger(__name__)
 
 
 class CDCLifeExpectancy(ExtractTransformLoad):
+    GEO_LEVEL = ValidGeoLevel.CENSUS_TRACT
+    PUERTO_RICO_EXPECTED_IN_DATA = False
+
+    NAME = "cdc_life_expectancy"
+
+    if settings.DATASOURCE_RETRIEVAL_FROM_AWS:
+        USA_FILE_URL = f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/cdc_file_expectancy/US_A.CSV"
+    else:
+        USA_FILE_URL: str = "https://ftp.cdc.gov/pub/Health_Statistics/NCHS/Datasets/NVSS/USALEEP/CSV/US_A.CSV"
+
+    LOAD_YAML_CONFIG: bool = False
+    LIFE_EXPECTANCY_FIELD_NAME = "Life expectancy (years)"
+    INPUT_GEOID_TRACT_FIELD_NAME = "Tract ID"
+
+    STATES_MISSING_FROM_USA_FILE = ["23", "55"]
+
+    # For some reason, LEEP does not include Maine or Wisconsin in its "All of
+    # USA" file. Load these separately.
+    if settings.DATASOURCE_RETRIEVAL_FROM_AWS:
+        WISCONSIN_FILE_URL: str = f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/cdc_file_expectancy/WI_A.CSV"
+        MAINE_FILE_URL: str = f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/cdc_file_expectancy/ME_A.CSV"
+    else:
+        WISCONSIN_FILE_URL: str = "https://ftp.cdc.gov/pub/Health_Statistics/NCHS/Datasets/NVSS/USALEEP/CSV/WI_A.CSV"
+        MAINE_FILE_URL: str = "https://ftp.cdc.gov/pub/Health_Statistics/NCHS/Datasets/NVSS/USALEEP/CSV/ME_A.CSV"
+
+    TRACT_INPUT_COLUMN_NAME = "Tract ID"
+    STATE_INPUT_COLUMN_NAME = "STATE2KX"
+
+    raw_df: pd.DataFrame
+    output_df: pd.DataFrame
+
     def __init__(self):
-        self.FILE_URL: str = "https://ftp.cdc.gov/pub/Health_Statistics/NCHS/Datasets/NVSS/USALEEP/CSV/US_A.CSV"
         self.OUTPUT_PATH: Path = (
             self.DATA_PATH / "dataset" / "cdc_life_expectancy"
         )
 
-        self.TRACT_INPUT_COLUMN_NAME = "Tract ID"
-        self.LIFE_EXPECTANCY_FIELD_NAME = "Life expectancy (years)"
-
         # Constants for output
         self.COLUMNS_TO_KEEP = [
             self.GEOID_TRACT_FIELD_NAME,
-            self.LIFE_EXPECTANCY_FIELD_NAME,
+            field_names.LIFE_EXPECTANCY_FIELD,
         ]
 
-        self.raw_df: pd.DataFrame
-        self.output_df: pd.DataFrame
-
-    def extract(self) -> None:
-        logger.info("Starting data download.")
-
-        download_file_name = (
-            self.get_tmp_path() / "cdc_life_expectancy" / "usa.csv"
-        )
+    def _download_and_prep_data(
+        self, file_url: str, download_file_name: pathlib.Path
+    ) -> pd.DataFrame:
         download_file_from_url(
-            file_url=self.FILE_URL,
+            file_url=file_url,
             download_file_name=download_file_name,
             verify=True,
         )
 
-        self.raw_df = pd.read_csv(
+        df = pd.read_csv(
             filepath_or_buffer=download_file_name,
             dtype={
                 # The following need to remain as strings for all of their digits, not get converted to numbers.
                 self.TRACT_INPUT_COLUMN_NAME: "string",
+                self.STATE_INPUT_COLUMN_NAME: "string",
             },
             low_memory=False,
         )
 
+        return df
+
+    def extract(self) -> None:
+        logger.info("Starting data download.")
+
+        all_usa_raw_df = self._download_and_prep_data(
+            file_url=self.USA_FILE_URL,
+            download_file_name=self.get_tmp_path() / "US_A.CSV",
+        )
+
+        # Check which states are missing
+        states_in_life_expectancy_usa_file = list(
+            all_usa_raw_df[self.STATE_INPUT_COLUMN_NAME].unique()
+        )
+
+        # Expect that PR, Island Areas, and Maine/Wisconsin are missing
+        compare_to_list_of_expected_state_fips_codes(
+            actual_state_fips_codes=states_in_life_expectancy_usa_file,
+            continental_us_expected=self.CONTINENTAL_US_EXPECTED_IN_DATA,
+            puerto_rico_expected=self.PUERTO_RICO_EXPECTED_IN_DATA,
+            island_areas_expected=self.ISLAND_AREAS_EXPECTED_IN_DATA,
+            additional_fips_codes_not_expected=self.STATES_MISSING_FROM_USA_FILE,
+        )
+
+        logger.info("Downloading data for Maine")
+        maine_raw_df = self._download_and_prep_data(
+            file_url=self.MAINE_FILE_URL,
+            download_file_name=self.get_tmp_path() / "maine.csv",
+        )
+
+        logger.info("Downloading data for Wisconsin")
+        wisconsin_raw_df = self._download_and_prep_data(
+            file_url=self.WISCONSIN_FILE_URL,
+            download_file_name=self.get_tmp_path() / "wisconsin.csv",
+        )
+
+        combined_df = pd.concat(
+            objs=[all_usa_raw_df, maine_raw_df, wisconsin_raw_df],
+            ignore_index=True,
+            verify_integrity=True,
+            axis=0,
+        )
+
+        states_in_combined_df = list(
+            combined_df[self.STATE_INPUT_COLUMN_NAME].unique()
+        )
+
+        # Expect that PR and Island Areas are the only things now missing
+        compare_to_list_of_expected_state_fips_codes(
+            actual_state_fips_codes=states_in_combined_df,
+            continental_us_expected=self.CONTINENTAL_US_EXPECTED_IN_DATA,
+            puerto_rico_expected=self.PUERTO_RICO_EXPECTED_IN_DATA,
+            island_areas_expected=self.ISLAND_AREAS_EXPECTED_IN_DATA,
+            additional_fips_codes_not_expected=[],
+        )
+
+        # Save the updated version
+        self.raw_df = combined_df
+
     def transform(self) -> None:
-        logger.info("Starting DOE energy burden transform.")
+        logger.info("Starting CDC life expectancy transform.")
 
         self.output_df = self.raw_df.rename(
             columns={
-                "e(0)": self.LIFE_EXPECTANCY_FIELD_NAME,
+                "e(0)": field_names.LIFE_EXPECTANCY_FIELD,
                 self.TRACT_INPUT_COLUMN_NAME: self.GEOID_TRACT_FIELD_NAME,
             }
         )
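For reference, a minimal sketch of the patching step in the extract above, where per-state files fill gaps in the national file; the rows below are illustrative, not real USALEEP data:

    import pandas as pd

    usa = pd.DataFrame(
        {"Tract ID": ["01001020100"], "e(0)": [73.1], "STATE2KX": ["01"]}
    )
    maine = pd.DataFrame(
        {"Tract ID": ["23001010100"], "e(0)": [78.9], "STATE2KX": ["23"]}
    )

    # Stack the national file with the state patch files row-wise.
    combined = pd.concat([usa, maine], axis=0, ignore_index=True)
    assert sorted(combined["STATE2KX"].unique()) == ["01", "23"]

The FIPS comparison helper is then run on the combined frame so that a silently missing state fails the ETL loudly instead of producing an incomplete score.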
@@ -1,20 +1,45 @@
-import pandas as pd
+import typing
 
+import pandas as pd
 from data_pipeline.etl.base import ExtractTransformLoad
-from data_pipeline.utils import get_module_logger, download_file_from_url
+from data_pipeline.etl.base import ValidGeoLevel
 from data_pipeline.score import field_names
+from data_pipeline.utils import download_file_from_url
+from data_pipeline.utils import get_module_logger
+from data_pipeline.config import settings
 
 logger = get_module_logger(__name__)
 
 
 class CDCPlacesETL(ExtractTransformLoad):
+    NAME = "cdc_places"
+    GEO_LEVEL: ValidGeoLevel = ValidGeoLevel.CENSUS_TRACT
+    PUERTO_RICO_EXPECTED_IN_DATA = False
+
+    CDC_GEOID_FIELD_NAME = "LocationID"
+    CDC_VALUE_FIELD_NAME = "Data_Value"
+    CDC_MEASURE_FIELD_NAME = "Measure"
+
     def __init__(self):
         self.OUTPUT_PATH = self.DATA_PATH / "dataset" / "cdc_places"
 
-        self.CDC_PLACES_URL = "https://chronicdata.cdc.gov/api/views/cwsq-ngmh/rows.csv?accessType=DOWNLOAD"
-        self.CDC_GEOID_FIELD_NAME = "LocationID"
-        self.CDC_VALUE_FIELD_NAME = "Data_Value"
-        self.CDC_MEASURE_FIELD_NAME = "Measure"
+        if settings.DATASOURCE_RETRIEVAL_FROM_AWS:
+            self.CDC_PLACES_URL = (
+                f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/"
+                "cdc_places/PLACES__Local_Data_for_Better_Health__Census_Tract_Data_2021_release.csv"
+            )
+        else:
+            self.CDC_PLACES_URL = "https://chronicdata.cdc.gov/api/views/cwsq-ngmh/rows.csv?accessType=DOWNLOAD"
+
+        self.COLUMNS_TO_KEEP: typing.List[str] = [
+            self.GEOID_TRACT_FIELD_NAME,
+            field_names.DIABETES_FIELD,
+            field_names.ASTHMA_FIELD,
+            field_names.HEART_DISEASE_FIELD,
+            field_names.CANCER_FIELD,
+            field_names.HEALTH_INSURANCE_FIELD,
+            field_names.PHYS_HEALTH_NOT_GOOD_FIELD,
+        ]
 
         self.df: pd.DataFrame
 

@@ -22,9 +47,7 @@ class CDCPlacesETL(ExtractTransformLoad):
         logger.info("Starting to download 520MB CDC Places file.")
         file_path = download_file_from_url(
             file_url=self.CDC_PLACES_URL,
-            download_file_name=self.get_tmp_path()
-            / "cdc_places"
-            / "census_tract.csv",
+            download_file_name=self.get_tmp_path() / "census_tract.csv",
         )
 
         self.df = pd.read_csv(

@@ -42,7 +65,6 @@ class CDCPlacesETL(ExtractTransformLoad):
             inplace=True,
             errors="raise",
         )
-
         # Note: Puerto Rico not included.
         self.df = self.df.pivot(
             index=self.GEOID_TRACT_FIELD_NAME,

@@ -65,12 +87,4 @@ class CDCPlacesETL(ExtractTransformLoad):
         )
 
         # Make the index (the census tract ID) a column, not the index.
-        self.df.reset_index(inplace=True)
-
-    def load(self) -> None:
-        logger.info("Saving CDC Places Data")
-
-        # mkdir census
-        self.OUTPUT_PATH.mkdir(parents=True, exist_ok=True)
-
-        self.df.to_csv(path_or_buf=self.OUTPUT_PATH / "usa.csv", index=False)
+        self.output_df = self.df.reset_index()
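For reference, a minimal sketch of the long-to-wide pivot used in the transform above: one row per (tract, measure) becomes one column per measure. The values below are illustrative:

    import pandas as pd

    long_df = pd.DataFrame(
        {
            "GEOID10_TRACT": ["01001020100", "01001020100"],
            "Measure": ["Current asthma", "Diagnosed diabetes"],
            "Data_Value": [9.9, 11.2],
        }
    )

    # Pivot measures into columns, one row per tract.
    wide_df = long_df.pivot(
        index="GEOID10_TRACT", columns="Measure", values="Data_Value"
    )
    wide_df = wide_df.reset_index()  # make the tract ID a column again
    print(wide_df.columns.tolist())
    # ['GEOID10_TRACT', 'Current asthma', 'Diagnosed diabetes']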
@@ -53,7 +53,7 @@ For SVI 2018, the authors also included two adjunct variables, 1) 2014-2018 ACS
 
 **Important Notes**
 
 1. Tracts with zero estimates for the total population (N = 645 for the U.S.) were removed during the ranking process. These tracts were added back to the SVI databases after ranking.
 
 2. For these zero-population tracts, the TOTPOP field value is 0, but the percentile ranking fields (RPL_THEME1, RPL_THEME2, RPL_THEME3, RPL_THEME4, and RPL_THEMES) were set to -999.
 

@@ -66,4 +66,4 @@ here: https://www.census.gov/programs-surveys/acs/data/variance-tables.html.
 
 For selected ACS 5-year Detailed Tables, “Users can calculate margins of error for aggregated data by using the variance replicates. Unlike available approximation formulas, this method results in an exact margin of error by using the covariance term.”
 
 MOEs are _not_ included or considered during this data processing, nor for the scoring comparison tool.
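A hypothetical illustration (not code from this repository) of how a consumer of the SVI file might treat the -999 sentinel described above, so it is handled as missing rather than as a real percentile rank:

    import numpy as np
    import pandas as pd

    svi = pd.DataFrame({"RPL_THEMES": [0.85, -999, 0.42]})

    # Replace the -999 sentinel with NaN before using the ranking fields.
    svi["RPL_THEMES"] = svi["RPL_THEMES"].replace(-999, np.nan)
    print(svi["RPL_THEMES"].tolist())  # [0.85, nan, 0.42]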
@@ -1,9 +1,9 @@
-import pandas as pd
 import numpy as np
+import pandas as pd
 from data_pipeline.etl.base import ExtractTransformLoad
-from data_pipeline.utils import get_module_logger
 from data_pipeline.score import field_names
+from data_pipeline.utils import get_module_logger
+from data_pipeline.config import settings
 
 logger = get_module_logger(__name__)
 

@@ -17,7 +17,13 @@ class CDCSVIIndex(ExtractTransformLoad):
     def __init__(self):
         self.OUTPUT_PATH = self.DATA_PATH / "dataset" / "cdc_svi_index"
 
-        self.CDC_SVI_INDEX_URL = "https://svi.cdc.gov/Documents/Data/2018_SVI_Data/CSV/SVI2018_US.csv"
+        if settings.DATASOURCE_RETRIEVAL_FROM_AWS:
+            self.CDC_SVI_INDEX_URL = (
+                f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/"
+                "cdc_svi_index/SVI2018_US.csv"
+            )
+        else:
+            self.CDC_SVI_INDEX_URL = "https://svi.cdc.gov/Documents/Data/2018_SVI_Data/CSV/SVI2018_US.csv"
 
         self.CDC_RPL_THEMES_THRESHOLD = 0.90
 
@@ -3,12 +3,12 @@ import json
 import subprocess
 from enum import Enum
 from pathlib import Path
 
 import geopandas as gpd
 
 from data_pipeline.etl.base import ExtractTransformLoad
-from data_pipeline.utils import get_module_logger, unzip_file_from_url
 from data_pipeline.etl.sources.census.etl_utils import get_state_fips_codes
+from data_pipeline.utils import get_module_logger
+from data_pipeline.utils import unzip_file_from_url
 
 logger = get_module_logger(__name__)
 

@@ -20,19 +20,20 @@ class GeoFileType(Enum):
 
 
 class CensusETL(ExtractTransformLoad):
+    SHP_BASE_PATH = ExtractTransformLoad.DATA_PATH / "census" / "shp"
+    GEOJSON_BASE_PATH = ExtractTransformLoad.DATA_PATH / "census" / "geojson"
+    CSV_BASE_PATH = ExtractTransformLoad.DATA_PATH / "census" / "csv"
+    GEOJSON_PATH = ExtractTransformLoad.DATA_PATH / "census" / "geojson"
+    NATIONAL_TRACT_CSV_PATH = CSV_BASE_PATH / "us.csv"
+    NATIONAL_TRACT_JSON_PATH = GEOJSON_BASE_PATH / "us.json"
+    GEOID_TRACT_FIELD_NAME: str = "GEOID10_TRACT"
+
     def __init__(self):
-        self.SHP_BASE_PATH = self.DATA_PATH / "census" / "shp"
-        self.GEOJSON_BASE_PATH = self.DATA_PATH / "census" / "geojson"
-        self.CSV_BASE_PATH = self.DATA_PATH / "census" / "csv"
         # the fips_states_2010.csv is generated from data here
         # https://www.census.gov/geographies/reference-files/time-series/geo/tallies.html
         self.STATE_FIPS_CODES = get_state_fips_codes(self.DATA_PATH)
-        self.GEOJSON_PATH = self.DATA_PATH / "census" / "geojson"
         self.TRACT_PER_STATE: dict = {}  # in-memory dict per state
         self.TRACT_NATIONAL: list = []  # in-memory global list
-        self.NATIONAL_TRACT_CSV_PATH = self.CSV_BASE_PATH / "us.csv"
-        self.NATIONAL_TRACT_JSON_PATH = self.GEOJSON_BASE_PATH / "us.json"
-        self.GEOID_TRACT_FIELD_NAME: str = "GEOID10_TRACT"
 
     def _path_for_fips_file(
         self, fips_code: str, file_type: GeoFileType
@@ -5,13 +5,11 @@ from pathlib import Path
 
 import pandas as pd
 from data_pipeline.config import settings
-from data_pipeline.utils import (
-    get_module_logger,
-    remove_all_dirs_from_dir,
-    remove_files_from_dir,
-    unzip_file_from_url,
-    zip_directory,
-)
+from data_pipeline.utils import get_module_logger
+from data_pipeline.utils import remove_all_dirs_from_dir
+from data_pipeline.utils import remove_files_from_dir
+from data_pipeline.utils import unzip_file_from_url
+from data_pipeline.utils import zip_directory
 
 logger = get_module_logger(__name__)
 
@ -1,22 +1,33 @@
|
||||||
import pandas as pd
|
import os
|
||||||
|
from collections import namedtuple
|
||||||
|
|
||||||
|
import geopandas as gpd
|
||||||
|
import pandas as pd
|
||||||
|
from data_pipeline.config import settings
|
||||||
from data_pipeline.etl.base import ExtractTransformLoad
|
from data_pipeline.etl.base import ExtractTransformLoad
|
||||||
|
from data_pipeline.etl.sources.census_acs.etl_imputations import (
|
||||||
|
calculate_income_measures,
|
||||||
|
)
|
||||||
from data_pipeline.etl.sources.census_acs.etl_utils import (
|
from data_pipeline.etl.sources.census_acs.etl_utils import (
|
||||||
retrieve_census_acs_data,
|
retrieve_census_acs_data,
|
||||||
)
|
)
|
||||||
from data_pipeline.utils import get_module_logger
|
|
||||||
from data_pipeline.score import field_names
|
from data_pipeline.score import field_names
|
||||||
|
from data_pipeline.utils import get_module_logger
|
||||||
|
from data_pipeline.utils import unzip_file_from_url
|
||||||
|
|
||||||
logger = get_module_logger(__name__)
|
logger = get_module_logger(__name__)
|
||||||
|
|
||||||
|
# because now there is a requirement for the us.json, this will port from
|
||||||
|
# AWS when a local copy does not exist.
|
||||||
|
CENSUS_DATA_S3_URL = settings.AWS_JUSTICE40_DATASOURCES_URL + "/census.zip"
|
||||||
|
|
||||||
|
|
||||||
class CensusACSETL(ExtractTransformLoad):
|
class CensusACSETL(ExtractTransformLoad):
|
||||||
def __init__(self):
|
NAME = "census_acs"
|
||||||
self.ACS_YEAR = 2019
|
ACS_YEAR = 2019
|
||||||
self.OUTPUT_PATH = (
|
MINIMUM_POPULATION_REQUIRED_FOR_IMPUTATION = 1
|
||||||
self.DATA_PATH / "dataset" / f"census_acs_{self.ACS_YEAR}"
|
|
||||||
)
|
|
||||||
|
|
||||||
|
def __init__(self):
|
||||||
self.TOTAL_UNEMPLOYED_FIELD = "B23025_005E"
|
self.TOTAL_UNEMPLOYED_FIELD = "B23025_005E"
|
||||||
self.TOTAL_IN_LABOR_FORCE = "B23025_003E"
|
self.TOTAL_IN_LABOR_FORCE = "B23025_003E"
|
||||||
self.EMPLOYMENT_FIELDS = [
|
self.EMPLOYMENT_FIELDS = [
|
||||||
|
@ -59,6 +70,23 @@ class CensusACSETL(ExtractTransformLoad):
|
||||||
self.POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME = (
|
self.POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME = (
|
||||||
"Percent of individuals < 200% Federal Poverty Line"
|
"Percent of individuals < 200% Federal Poverty Line"
|
||||||
)
|
)
|
||||||
|
self.IMPUTED_POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME = (
|
||||||
|
"Percent of individuals < 200% Federal Poverty Line, imputed"
|
||||||
|
)
|
||||||
|
|
||||||
|
self.ADJUSTED_POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME = (
|
||||||
|
"Adjusted percent of individuals < 200% Federal Poverty Line"
|
||||||
|
)
|
||||||
|
|
||||||
|
self.ADJUSTED_AND_IMPUTED_POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME_PRELIMINARY = (
|
||||||
|
"Preliminary adjusted percent of individuals < 200% Federal Poverty Line,"
|
||||||
|
+ " imputed"
|
||||||
|
)
|
||||||
|
|
||||||
|
self.ADJUSTED_AND_IMPUTED_POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME = (
|
||||||
|
"Adjusted percent of individuals < 200% Federal Poverty Line,"
|
||||||
|
+ " imputed"
|
||||||
|
)
|
||||||
|
|
||||||
self.MEDIAN_HOUSE_VALUE_FIELD = "B25077_001E"
|
self.MEDIAN_HOUSE_VALUE_FIELD = "B25077_001E"
|
||||||
self.MEDIAN_HOUSE_VALUE_FIELD_NAME = (
|
self.MEDIAN_HOUSE_VALUE_FIELD_NAME = (
|
||||||
|
@ -136,6 +164,10 @@ class CensusACSETL(ExtractTransformLoad):
|
||||||
"Percent enrollment in college or graduate school"
|
"Percent enrollment in college or graduate school"
|
||||||
)
|
)
|
||||||
|
|
||||||
|
self.IMPUTED_COLLEGE_ATTENDANCE_FIELD = (
|
||||||
|
"Percent enrollment in college or graduate school, imputed"
|
||||||
|
)
|
||||||
|
|
||||||
self.COLLEGE_NON_ATTENDANCE_FIELD = "Percent of population not currently enrolled in college or graduate school"
|
self.COLLEGE_NON_ATTENDANCE_FIELD = "Percent of population not currently enrolled in college or graduate school"
|
||||||
|
|
||||||
self.RE_FIELDS = [
|
self.RE_FIELDS = [
|
||||||
|
@ -153,19 +185,25 @@ class CensusACSETL(ExtractTransformLoad):
|
||||||
"B03002_003E",
|
"B03002_003E",
|
||||||
"B03003_001E",
|
"B03003_001E",
|
||||||
"B03003_003E",
|
"B03003_003E",
|
||||||
|
"B02001_007E", # "Some other race alone"
|
||||||
]
|
]
|
||||||
|
|
||||||
# Name output demographics fields.
|
self.BLACK_FIELD_NAME = "Black or African American"
|
||||||
self.BLACK_FIELD_NAME = "Black or African American alone"
|
self.AMERICAN_INDIAN_FIELD_NAME = "American Indian / Alaska Native"
|
||||||
self.AMERICAN_INDIAN_FIELD_NAME = (
|
self.ASIAN_FIELD_NAME = "Asian"
|
||||||
"American Indian and Alaska Native alone"
|
self.HAWAIIAN_FIELD_NAME = "Native Hawaiian or Pacific"
|
||||||
)
|
self.TWO_OR_MORE_RACES_FIELD_NAME = "two or more races"
|
||||||
self.ASIAN_FIELD_NAME = "Asian alone"
|
self.NON_HISPANIC_WHITE_FIELD_NAME = "White"
|
||||||
self.HAWAIIAN_FIELD_NAME = "Native Hawaiian and Other Pacific alone"
|
|
||||||
self.TWO_OR_MORE_RACES_FIELD_NAME = "Two or more races"
|
|
||||||
self.NON_HISPANIC_WHITE_FIELD_NAME = "Non-Hispanic White"
|
|
||||||
self.HISPANIC_FIELD_NAME = "Hispanic or Latino"
|
self.HISPANIC_FIELD_NAME = "Hispanic or Latino"
|
||||||
|
# Note that `other` is lowercase because the whole field will show up in the download
|
||||||
|
# file as "Percent other races"
|
||||||
|
self.OTHER_RACE_FIELD_NAME = "other races"
|
||||||
|
|
||||||
|
self.TOTAL_RACE_POPULATION_FIELD_NAME = (
|
||||||
|
"Total population surveyed on racial data"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Name output demographics fields.
|
||||||
self.RE_OUTPUT_FIELDS = [
|
self.RE_OUTPUT_FIELDS = [
|
||||||
self.BLACK_FIELD_NAME,
|
self.BLACK_FIELD_NAME,
|
||||||
self.AMERICAN_INDIAN_FIELD_NAME,
|
self.AMERICAN_INDIAN_FIELD_NAME,
|
||||||
|
@ -174,32 +212,133 @@ class CensusACSETL(ExtractTransformLoad):
|
||||||
self.TWO_OR_MORE_RACES_FIELD_NAME,
|
self.TWO_OR_MORE_RACES_FIELD_NAME,
|
||||||
self.NON_HISPANIC_WHITE_FIELD_NAME,
|
self.NON_HISPANIC_WHITE_FIELD_NAME,
|
||||||
self.HISPANIC_FIELD_NAME,
|
self.HISPANIC_FIELD_NAME,
|
||||||
|
self.OTHER_RACE_FIELD_NAME,
|
||||||
]
|
]
|
||||||
|
|
||||||
self.PERCENT_PREFIX = "Percent "
|
# Note: this field does double-duty here. It's used as the total population
|
||||||
|
# within the age questions.
|
||||||
|
# It's also what EJScreen used as their variable for total population in the
|
||||||
|
# census tract, so we use it similarly.
|
||||||
|
# See p. 83 of https://www.epa.gov/sites/default/files/2021-04/documents/ejscreen_technical_document.pdf
|
||||||
|
self.TOTAL_POPULATION_FROM_AGE_TABLE = "B01001_001E" # Estimate!!Total:
|
||||||
|
|
||||||
|
self.AGE_INPUT_FIELDS = [
|
||||||
|
self.TOTAL_POPULATION_FROM_AGE_TABLE,
|
||||||
|
"B01001_003E", # Estimate!!Total:!!Male:!!Under 5 years
|
||||||
|
"B01001_004E", # Estimate!!Total:!!Male:!!5 to 9 years
|
||||||
|
"B01001_005E", # Estimate!!Total:!!Male:!!10 to 14 years
|
||||||
|
"B01001_006E", # Estimate!!Total:!!Male:!!15 to 17 years
|
||||||
|
"B01001_007E", # Estimate!!Total:!!Male:!!18 and 19 years
|
||||||
|
"B01001_008E", # Estimate!!Total:!!Male:!!20 years
|
||||||
|
"B01001_009E", # Estimate!!Total:!!Male:!!21 years
|
||||||
|
"B01001_010E", # Estimate!!Total:!!Male:!!22 to 24 years
|
||||||
|
"B01001_011E", # Estimate!!Total:!!Male:!!25 to 29 years
|
||||||
|
"B01001_012E", # Estimate!!Total:!!Male:!!30 to 34 years
|
||||||
|
"B01001_013E", # Estimate!!Total:!!Male:!!35 to 39 years
|
||||||
|
"B01001_014E", # Estimate!!Total:!!Male:!!40 to 44 years
|
||||||
|
"B01001_015E", # Estimate!!Total:!!Male:!!45 to 49 years
|
||||||
|
"B01001_016E", # Estimate!!Total:!!Male:!!50 to 54 years
|
||||||
|
"B01001_017E", # Estimate!!Total:!!Male:!!55 to 59 years
|
||||||
|
"B01001_018E", # Estimate!!Total:!!Male:!!60 and 61 years
|
||||||
|
"B01001_019E", # Estimate!!Total:!!Male:!!62 to 64 years
|
||||||
|
"B01001_020E", # Estimate!!Total:!!Male:!!65 and 66 years
|
||||||
|
"B01001_021E", # Estimate!!Total:!!Male:!!67 to 69 years
|
||||||
|
"B01001_022E", # Estimate!!Total:!!Male:!!70 to 74 years
|
||||||
|
"B01001_023E", # Estimate!!Total:!!Male:!!75 to 79 years
|
||||||
|
"B01001_024E", # Estimate!!Total:!!Male:!!80 to 84 years
|
||||||
|
"B01001_025E", # Estimate!!Total:!!Male:!!85 years and over
|
||||||
|
"B01001_027E", # Estimate!!Total:!!Female:!!Under 5 years
|
||||||
|
"B01001_028E", # Estimate!!Total:!!Female:!!5 to 9 years
|
||||||
|
"B01001_029E", # Estimate!!Total:!!Female:!!10 to 14 years
|
||||||
|
"B01001_030E", # Estimate!!Total:!!Female:!!15 to 17 years
|
||||||
|
"B01001_031E", # Estimate!!Total:!!Female:!!18 and 19 years
|
||||||
|
"B01001_032E", # Estimate!!Total:!!Female:!!20 years
|
||||||
|
"B01001_033E", # Estimate!!Total:!!Female:!!21 years
|
||||||
|
"B01001_034E", # Estimate!!Total:!!Female:!!22 to 24 years
|
||||||
|
"B01001_035E", # Estimate!!Total:!!Female:!!25 to 29 years
|
||||||
|
"B01001_036E", # Estimate!!Total:!!Female:!!30 to 34 years
|
||||||
|
"B01001_037E", # Estimate!!Total:!!Female:!!35 to 39 years
|
||||||
|
"B01001_038E", # Estimate!!Total:!!Female:!!40 to 44 years
|
||||||
|
"B01001_039E", # Estimate!!Total:!!Female:!!45 to 49 years
|
||||||
|
"B01001_040E", # Estimate!!Total:!!Female:!!50 to 54 years
|
||||||
|
"B01001_041E", # Estimate!!Total:!!Female:!!55 to 59 years
|
||||||
|
"B01001_042E", # Estimate!!Total:!!Female:!!60 and 61 years
|
||||||
|
"B01001_043E", # Estimate!!Total:!!Female:!!62 to 64 years
|
||||||
|
"B01001_044E", # Estimate!!Total:!!Female:!!65 and 66 years
|
||||||
|
"B01001_045E", # Estimate!!Total:!!Female:!!67 to 69 years
|
||||||
|
"B01001_046E", # Estimate!!Total:!!Female:!!70 to 74 years
|
||||||
|
"B01001_047E", # Estimate!!Total:!!Female:!!75 to 79 years
|
||||||
|
"B01001_048E", # Estimate!!Total:!!Female:!!80 to 84 years
|
||||||
|
"B01001_049E", # Estimate!!Total:!!Female:!!85 years and over
|
||||||
|
]
|
||||||
|
|
||||||
|
self.AGE_OUTPUT_FIELDS = [
|
||||||
|
field_names.PERCENT_AGE_UNDER_10,
|
||||||
|
field_names.PERCENT_AGE_10_TO_64,
|
||||||
|
field_names.PERCENT_AGE_OVER_64,
|
||||||
|
]
|
||||||
|
|
||||||
self.STATE_GEOID_FIELD_NAME = "GEOID2"
|
self.STATE_GEOID_FIELD_NAME = "GEOID2"
|
||||||
|
|
||||||
self.COLUMNS_TO_KEEP = (
|
self.COLUMNS_TO_KEEP = (
|
||||||
[
|
[
|
||||||
self.GEOID_TRACT_FIELD_NAME,
|
self.GEOID_TRACT_FIELD_NAME,
|
||||||
|
field_names.TOTAL_POP_FIELD,
|
||||||
self.UNEMPLOYED_FIELD_NAME,
|
self.UNEMPLOYED_FIELD_NAME,
|
||||||
self.LINGUISTIC_ISOLATION_FIELD_NAME,
|
self.LINGUISTIC_ISOLATION_FIELD_NAME,
|
||||||
self.MEDIAN_INCOME_FIELD_NAME,
|
self.MEDIAN_INCOME_FIELD_NAME,
|
||||||
self.POVERTY_LESS_THAN_100_PERCENT_FPL_FIELD_NAME,
|
self.POVERTY_LESS_THAN_100_PERCENT_FPL_FIELD_NAME,
|
||||||
self.POVERTY_LESS_THAN_150_PERCENT_FPL_FIELD_NAME,
|
self.POVERTY_LESS_THAN_150_PERCENT_FPL_FIELD_NAME,
|
||||||
self.POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME,
|
self.IMPUTED_POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME,
|
||||||
self.MEDIAN_HOUSE_VALUE_FIELD_NAME,
|
self.MEDIAN_HOUSE_VALUE_FIELD_NAME,
|
||||||
self.HIGH_SCHOOL_ED_FIELD,
|
self.HIGH_SCHOOL_ED_FIELD,
|
||||||
self.COLLEGE_ATTENDANCE_FIELD,
|
self.COLLEGE_ATTENDANCE_FIELD,
|
||||||
self.COLLEGE_NON_ATTENDANCE_FIELD,
|
self.COLLEGE_NON_ATTENDANCE_FIELD,
|
||||||
|
self.IMPUTED_COLLEGE_ATTENDANCE_FIELD,
|
||||||
|
field_names.IMPUTED_INCOME_FLAG_FIELD_NAME,
|
||||||
]
|
]
|
||||||
+ self.RE_OUTPUT_FIELDS
|
+ self.RE_OUTPUT_FIELDS
|
||||||
+ [self.PERCENT_PREFIX + field for field in self.RE_OUTPUT_FIELDS]
|
+ [
|
||||||
|
field_names.PERCENT_PREFIX + field
|
||||||
|
for field in self.RE_OUTPUT_FIELDS
|
||||||
|
]
|
||||||
|
+ self.AGE_OUTPUT_FIELDS
|
||||||
|
+ [
|
||||||
|
field_names.POVERTY_LESS_THAN_200_FPL_FIELD,
|
||||||
|
field_names.POVERTY_LESS_THAN_200_FPL_IMPUTED_FIELD,
|
||||||
|
]
|
||||||
)
|
)
|
||||||
|
|
||||||
self.df: pd.DataFrame
|
self.df: pd.DataFrame
|
||||||
|
|
||||||
|
# pylint: disable=too-many-arguments
|
||||||
|
def _merge_geojson(
|
||||||
|
self,
|
||||||
|
df: pd.DataFrame,
|
||||||
|
usa_geo_df: gpd.GeoDataFrame,
|
||||||
|
geoid_field: str = "GEOID10",
|
||||||
|
geometry_field: str = "geometry",
|
||||||
|
state_code_field: str = "STATEFP10",
|
||||||
|
county_code_field: str = "COUNTYFP10",
|
||||||
|
) -> gpd.GeoDataFrame:
|
||||||
|
usa_geo_df[geoid_field] = (
|
||||||
|
usa_geo_df[geoid_field].astype(str).str.zfill(11)
|
||||||
|
)
|
||||||
|
return gpd.GeoDataFrame(
|
||||||
|
df.merge(
|
||||||
|
usa_geo_df[
|
||||||
|
[
|
||||||
|
geoid_field,
|
||||||
|
geometry_field,
|
||||||
|
state_code_field,
|
||||||
|
county_code_field,
|
||||||
|
]
|
||||||
|
],
|
||||||
|
left_on=[self.GEOID_TRACT_FIELD_NAME],
|
||||||
|
right_on=[geoid_field],
|
||||||
|
)
|
||||||
|
)

def extract(self) -> None:
    # Define the variables to retrieve
    variables = (

@ -213,6 +352,7 @@ class CensusACSETL(ExtractTransformLoad):
        + self.EDUCATIONAL_FIELDS
        + self.RE_FIELDS
        + self.COLLEGE_ATTENDANCE_FIELDS
        + self.AGE_INPUT_FIELDS
    )

    self.df = retrieve_census_acs_data(

@ -227,12 +367,37 @@ class CensusACSETL(ExtractTransformLoad):

df = self.df

# Here we join the geometry of the US to the dataframe so that we can impute
# the income of neighbors. First this looks locally; if there's no local
# geojson file for all of the US, this will read it off of S3.
logger.info("Reading in geojson for the country")
if not os.path.exists(
    self.DATA_PATH / "census" / "geojson" / "us.json"
):
    logger.info("Fetching Census data from AWS S3")
    unzip_file_from_url(
        CENSUS_DATA_S3_URL,
        self.DATA_PATH / "tmp",
        self.DATA_PATH,
    )

geo_df = gpd.read_file(
    self.DATA_PATH / "census" / "geojson" / "us.json",
)

df = self._merge_geojson(
    df=df,
    usa_geo_df=geo_df,
)

# Rename some fields.
df = df.rename(
    columns={
        self.MEDIAN_HOUSE_VALUE_FIELD: self.MEDIAN_HOUSE_VALUE_FIELD_NAME,
        self.MEDIAN_INCOME_FIELD: self.MEDIAN_INCOME_FIELD_NAME,
        self.TOTAL_POPULATION_FROM_AGE_TABLE: field_names.TOTAL_POP_FIELD,
    },
    errors="raise",
)

# Handle null values for various fields, which are `-666666666`.

@ -318,38 +483,101 @@ class CensusACSETL(ExtractTransformLoad):
)

# Calculate some demographic information.
df = df.rename(
    columns={
        "B02001_003E": self.BLACK_FIELD_NAME,
        "B02001_004E": self.AMERICAN_INDIAN_FIELD_NAME,
        "B02001_005E": self.ASIAN_FIELD_NAME,
        "B02001_006E": self.HAWAIIAN_FIELD_NAME,
        "B02001_008E": self.TWO_OR_MORE_RACES_FIELD_NAME,
        "B03002_003E": self.NON_HISPANIC_WHITE_FIELD_NAME,
        "B03003_003E": self.HISPANIC_FIELD_NAME,
        "B02001_007E": self.OTHER_RACE_FIELD_NAME,
        "B02001_001E": self.TOTAL_RACE_POPULATION_FIELD_NAME,
    },
    errors="raise",
)

# Calculate demographics as percent
for race_field_name in self.RE_OUTPUT_FIELDS:
    df[field_names.PERCENT_PREFIX + race_field_name] = (
        df[race_field_name] / df[self.TOTAL_RACE_POPULATION_FIELD_NAME]
    )

# First value is the `age bucket`, and the second value is a list of all fields
# that will be summed in the calculations of the total population in that age
# bucket.
age_bucket_and_its_sum_columns = [
    (
        field_names.PERCENT_AGE_UNDER_10,
        [
            "B01001_003E",  # Estimate!!Total:!!Male:!!Under 5 years
            "B01001_004E",  # Estimate!!Total:!!Male:!!5 to 9 years
            "B01001_027E",  # Estimate!!Total:!!Female:!!Under 5 years
            "B01001_028E",  # Estimate!!Total:!!Female:!!5 to 9 years
        ],
    ),
    (
        field_names.PERCENT_AGE_10_TO_64,
        [
            "B01001_005E",  # Estimate!!Total:!!Male:!!10 to 14 years
            "B01001_006E",  # Estimate!!Total:!!Male:!!15 to 17 years
            "B01001_007E",  # Estimate!!Total:!!Male:!!18 and 19 years
            "B01001_008E",  # Estimate!!Total:!!Male:!!20 years
            "B01001_009E",  # Estimate!!Total:!!Male:!!21 years
            "B01001_010E",  # Estimate!!Total:!!Male:!!22 to 24 years
            "B01001_011E",  # Estimate!!Total:!!Male:!!25 to 29 years
            "B01001_012E",  # Estimate!!Total:!!Male:!!30 to 34 years
            "B01001_013E",  # Estimate!!Total:!!Male:!!35 to 39 years
            "B01001_014E",  # Estimate!!Total:!!Male:!!40 to 44 years
            "B01001_015E",  # Estimate!!Total:!!Male:!!45 to 49 years
            "B01001_016E",  # Estimate!!Total:!!Male:!!50 to 54 years
            "B01001_017E",  # Estimate!!Total:!!Male:!!55 to 59 years
            "B01001_018E",  # Estimate!!Total:!!Male:!!60 and 61 years
            "B01001_019E",  # Estimate!!Total:!!Male:!!62 to 64 years
            "B01001_029E",  # Estimate!!Total:!!Female:!!10 to 14 years
            "B01001_030E",  # Estimate!!Total:!!Female:!!15 to 17 years
            "B01001_031E",  # Estimate!!Total:!!Female:!!18 and 19 years
            "B01001_032E",  # Estimate!!Total:!!Female:!!20 years
            "B01001_033E",  # Estimate!!Total:!!Female:!!21 years
            "B01001_034E",  # Estimate!!Total:!!Female:!!22 to 24 years
            "B01001_035E",  # Estimate!!Total:!!Female:!!25 to 29 years
            "B01001_036E",  # Estimate!!Total:!!Female:!!30 to 34 years
            "B01001_037E",  # Estimate!!Total:!!Female:!!35 to 39 years
            "B01001_038E",  # Estimate!!Total:!!Female:!!40 to 44 years
            "B01001_039E",  # Estimate!!Total:!!Female:!!45 to 49 years
            "B01001_040E",  # Estimate!!Total:!!Female:!!50 to 54 years
            "B01001_041E",  # Estimate!!Total:!!Female:!!55 to 59 years
            "B01001_042E",  # Estimate!!Total:!!Female:!!60 and 61 years
            "B01001_043E",  # Estimate!!Total:!!Female:!!62 to 64 years
        ],
    ),
    (
        field_names.PERCENT_AGE_OVER_64,
        [
            "B01001_020E",  # Estimate!!Total:!!Male:!!65 and 66 years
            "B01001_021E",  # Estimate!!Total:!!Male:!!67 to 69 years
            "B01001_022E",  # Estimate!!Total:!!Male:!!70 to 74 years
            "B01001_023E",  # Estimate!!Total:!!Male:!!75 to 79 years
            "B01001_024E",  # Estimate!!Total:!!Male:!!80 to 84 years
            "B01001_025E",  # Estimate!!Total:!!Male:!!85 years and over
            "B01001_044E",  # Estimate!!Total:!!Female:!!65 and 66 years
            "B01001_045E",  # Estimate!!Total:!!Female:!!67 to 69 years
            "B01001_046E",  # Estimate!!Total:!!Female:!!70 to 74 years
            "B01001_047E",  # Estimate!!Total:!!Female:!!75 to 79 years
            "B01001_048E",  # Estimate!!Total:!!Female:!!80 to 84 years
            "B01001_049E",  # Estimate!!Total:!!Female:!!85 years and over
        ],
    ),
]

# For each age bucket, sum the relevant columns and calculate the total
# percentage.
for age_bucket, sum_columns in age_bucket_and_its_sum_columns:
    df[age_bucket] = (
        df[sum_columns].sum(axis=1) / df[field_names.TOTAL_POP_FIELD]
    )

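As a worked example of the bucket arithmetic above, with hypothetical counts: if the four "under 10" columns sum to 120 people in a tract of 1,000, the bucket value is 0.12. A minimal sketch with toy data (the output column names stand in for the `field_names` constants):

    import pandas as pd

    toy = pd.DataFrame(
        {
            "B01001_003E": [30], "B01001_004E": [40],
            "B01001_027E": [25], "B01001_028E": [25],
            "Total population": [1000],
        }
    )
    under_10_cols = ["B01001_003E", "B01001_004E", "B01001_027E", "B01001_028E"]
    toy["Percent age under 10"] = (
        toy[under_10_cols].sum(axis=1) / toy["Total population"]
    )
    print(toy["Percent age under 10"].iloc[0])  # 0.12
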
# Calculate college attendance and adjust low income
df[self.COLLEGE_ATTENDANCE_FIELD] = (
    df[self.COLLEGE_ATTENDANCE_MALE_ENROLLED_PUBLIC]
    + df[self.COLLEGE_ATTENDANCE_MALE_ENROLLED_PRIVATE]
@ -361,26 +589,75 @@ class CensusACSETL(ExtractTransformLoad):
    1 - df[self.COLLEGE_ATTENDANCE_FIELD]
)

# We impute income for both income measures.
## TODO: Convert to pydantic for clarity
logger.info("Imputing income information")
ImputeVariables = namedtuple(
    "ImputeVariables", ["raw_field_name", "imputed_field_name"]
)

df = calculate_income_measures(
    impute_var_named_tup_list=[
        ImputeVariables(
            raw_field_name=self.POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME,
            imputed_field_name=self.IMPUTED_POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME,
        ),
        ImputeVariables(
            raw_field_name=self.COLLEGE_ATTENDANCE_FIELD,
            imputed_field_name=self.IMPUTED_COLLEGE_ATTENDANCE_FIELD,
        ),
    ],
    geo_df=df,
    geoid_field=self.GEOID_TRACT_FIELD_NAME,
    minimum_population_required_for_imputation=self.MINIMUM_POPULATION_REQUIRED_FOR_IMPUTATION,
)

logger.info("Calculating with imputed values")

df[
    self.ADJUSTED_AND_IMPUTED_POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME
] = (
    df[self.POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME].fillna(
        df[self.IMPUTED_POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME]
    )
    - df[self.COLLEGE_ATTENDANCE_FIELD].fillna(
        df[self.IMPUTED_COLLEGE_ATTENDANCE_FIELD]
    )
    # Use clip to ensure that the values are not negative if college attendance
    # is very high
).clip(lower=0)

# All values should have a value at this point
assert (
    # For tracts with >0 population
    df[
        df[field_names.TOTAL_POP_FIELD]
        >= self.MINIMUM_POPULATION_REQUIRED_FOR_IMPUTATION
    ][
        # Then the imputed field should have no nulls
        self.ADJUSTED_AND_IMPUTED_POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME
    ]
    .isna()
    .sum()
    == 0
), "Error: not all values were filled..."

logger.info("Renaming columns...")
df = df.rename(
    columns={
        self.ADJUSTED_AND_IMPUTED_POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME: field_names.POVERTY_LESS_THAN_200_FPL_IMPUTED_FIELD,
        self.POVERTY_LESS_THAN_200_PERCENT_FPL_FIELD_NAME: field_names.POVERTY_LESS_THAN_200_FPL_FIELD,
    }
)

# We generate a boolean that is TRUE when there is an imputed income but not a
# baseline income, and FALSE otherwise. This allows us to see which tracts have
# an imputed income.
df[field_names.IMPUTED_INCOME_FLAG_FIELD_NAME] = (
    df[field_names.POVERTY_LESS_THAN_200_FPL_IMPUTED_FIELD].notna()
    & df[field_names.POVERTY_LESS_THAN_200_FPL_FIELD].isna()
)

# Save results to self.
self.output_df = df
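
To make the adjusted-income arithmetic above concrete: the raw share below 200% of the federal poverty line is backfilled with the imputed share, the college-attendance share (raw, backfilled with imputed) is subtracted, and the result is clipped at zero. A minimal sketch with toy Series (the variable names are placeholders, not the pipeline's `field_names` constants):

    import numpy as np
    import pandas as pd

    raw_poverty = pd.Series([0.50, np.nan, 0.10])
    imputed_poverty = pd.Series([0.50, 0.40, 0.10])
    college = pd.Series([0.05, np.nan, 0.30])
    imputed_college = pd.Series([0.05, 0.10, 0.30])

    adjusted = (
        raw_poverty.fillna(imputed_poverty) - college.fillna(imputed_college)
    ).clip(lower=0)
    print(adjusted.tolist())  # [0.45, 0.30, 0.0]
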

@ -0,0 +1,166 @@
from typing import Any
from typing import List
from typing import NamedTuple
from typing import Tuple

import geopandas as gpd
import pandas as pd
from data_pipeline.score import field_names
from data_pipeline.utils import get_module_logger

# pylint: disable=unsubscriptable-object

logger = get_module_logger(__name__)


def _get_fips_mask(
    geo_df: gpd.GeoDataFrame,
    row: gpd.GeoSeries,
    fips_digits: int,
    geoid_field: str = "GEOID10_TRACT",
) -> pd.Series:
    return (
        geo_df[geoid_field].str[:fips_digits] == row[geoid_field][:fips_digits]
    )
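
The `fips_digits` argument works because tract GEOIDs are hierarchical: the first 2 characters are the state FIPS code and the first 5 are state plus county. A tiny illustration with one real-format GEOID:

    geoid = "48201223100"
    assert geoid[:2] == "48"     # state FIPS (Texas)
    assert geoid[:5] == "48201"  # state + county FIPS (Harris County)

So `fips_digits=5` masks to same-county tracts, while `fips_digits=2` widens to the whole state.
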

def _get_neighbor_mask(
    geo_df: gpd.GeoDataFrame, row: gpd.GeoSeries
) -> pd.Series:
    """Returns neighboring tracts."""
    return geo_df["geometry"].touches(row["geometry"])


def _choose_best_mask(
    geo_df: gpd.GeoDataFrame,
    masks_in_priority_order: List[pd.Series],
    column_to_impute: str,
) -> pd.Series:
    for mask in masks_in_priority_order:
        if any(geo_df[mask][column_to_impute].notna()):
            return mask
    raise Exception("No mask found")


def _prepare_dataframe_for_imputation(
    impute_var_named_tup_list: List[NamedTuple],
    geo_df: gpd.GeoDataFrame,
    population_field: str,
    minimum_population_required_for_imputation: int = 1,
    geoid_field: str = "GEOID10_TRACT",
) -> Tuple[Any, gpd.GeoDataFrame]:
    """Helper for imputation.

    Given the inputs of `ImputeVariables`, returns list of tracts that need to be
    imputed, along with a GeoDataFrame that has a column with the imputed field
    "primed", meaning it is a copy of the raw field.

    Will drop any rows with population less than
    `minimum_population_required_for_imputation`.
    """
    imputing_cols = [
        impute_var_pair.raw_field_name
        for impute_var_pair in impute_var_named_tup_list
    ]

    # Prime the imputed column to exist as a copy of the raw column.
    for impute_var_pair in impute_var_named_tup_list:
        geo_df[impute_var_pair.imputed_field_name] = geo_df[
            impute_var_pair.raw_field_name
        ].copy()

    # Generate a list of tracts for which at least one of the imputation
    # columns is null and that also meets the population criteria.
    tract_list = geo_df[
        (
            # First, check whether any of the columns we want to impute contain null
            # values
            geo_df[imputing_cols].isna().any(axis=1)
            # Second, ensure population is not null and >= the minimum population
            & (
                geo_df[population_field].notnull()
                & (
                    geo_df[population_field]
                    >= minimum_population_required_for_imputation
                )
            )
        )
    ][geoid_field].unique()

    # Check that imputation is a valid choice for this set of fields
    logger.info(f"Imputing values for {len(tract_list)} unique tracts.")
    assert len(tract_list) > 0, "Error: No missing values to impute"

    return tract_list, geo_df


def calculate_income_measures(
    impute_var_named_tup_list: list,
    geo_df: gpd.GeoDataFrame,
    geoid_field: str,
    population_field: str = field_names.TOTAL_POP_FIELD,
    minimum_population_required_for_imputation: int = 1,
) -> pd.DataFrame:
    """Impute values based on geographic neighbors.

    We only want to check neighbors a single time, so all variables
    that we impute get imputed here.

    Takes in:
        required:
            impute_var_named_tup_list: list of named tuples (imputed field, raw field)
            geo_df: geo dataframe that already has the census shapefiles merged
            geoid_field: tract-level ID

    Returns: non-geometry pd.DataFrame
    """
    # Determine where to impute variables and fill a column with nulls
    tract_list, geo_df = _prepare_dataframe_for_imputation(
        impute_var_named_tup_list=impute_var_named_tup_list,
        geo_df=geo_df,
        geoid_field=geoid_field,
        population_field=population_field,
        minimum_population_required_for_imputation=minimum_population_required_for_imputation,
    )

    # Iterate through the dataframe to impute in place.
    ## TODO: We should probably convert this to a spatial join now that we are doing
    ## >1 imputation and it's taking a lot of time, but thinking through how to do
    ## this while maintaining the masking will take some time. I think the best way
    ## would be to (1) spatial join to all neighbors, and then (2) iterate to take
    ## the "smallest" set of neighbors... but haven't implemented it yet.
    for index, row in geo_df.iterrows():
        if row[geoid_field] in tract_list:
            neighbor_mask = _get_neighbor_mask(geo_df, row)
            county_mask = _get_fips_mask(
                geo_df=geo_df, row=row, fips_digits=5, geoid_field=geoid_field
            )
            ## TODO: Did CEQ decide to cut this?
            state_mask = _get_fips_mask(
                geo_df=geo_df, row=row, fips_digits=2, geoid_field=geoid_field
            )

            # Impute fields for every row missing at least one value using the best
            # possible set of neighbors. Note that later, we will pull
            # raw.fillna(imputed), so the mechanics of this step aren't critical.
            for impute_var_pair in impute_var_named_tup_list:
                mask_to_use = _choose_best_mask(
                    geo_df=geo_df,
                    masks_in_priority_order=[
                        neighbor_mask,
                        county_mask,
                        state_mask,
                    ],
                    column_to_impute=impute_var_pair.raw_field_name,
                )
                geo_df.loc[index, impute_var_pair.imputed_field_name] = geo_df[
                    mask_to_use
                ][impute_var_pair.raw_field_name].mean()

    logger.info("Casting geodataframe as a typical dataframe")
    # Get rid of the geometry column and cast as a typical df.
    df = pd.DataFrame(
        geo_df[[col for col in geo_df.columns if col != "geometry"]]
    )

    # Finally, return the df.
    return df
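
A hedged usage sketch of the module above: given a GeoDataFrame of tract polygons whose raw column contains gaps, `calculate_income_measures` fills a parallel imputed column from the nearest available neighbors (touching tracts first, then county, then state). The field names below are illustrative stand-ins; the real pipeline builds the named tuples in `CensusACSETL.transform`:

    from collections import namedtuple

    ImputeVariables = namedtuple(
        "ImputeVariables", ["raw_field_name", "imputed_field_name"]
    )

    # geo_df is assumed to already carry tract geometries, a "GEOID10_TRACT"
    # column, a population column, and a raw "poverty_rate" column with NaNs.
    result = calculate_income_measures(
        impute_var_named_tup_list=[
            ImputeVariables(
                raw_field_name="poverty_rate",
                imputed_field_name="poverty_rate (imputed)",
            )
        ],
        geo_df=geo_df,
        geoid_field="GEOID10_TRACT",
        population_field="Total population",
    )
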

@ -1,9 +1,9 @@
import os
from pathlib import Path
from typing import List

import censusdata
import pandas as pd

from data_pipeline.etl.sources.census.etl_utils import get_state_fips_codes
from data_pipeline.utils import get_module_logger


@ -1,11 +1,10 @@
import pandas as pd

from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.sources.census_acs.etl_utils import (
    retrieve_census_acs_data,
)
from data_pipeline.score import field_names
from data_pipeline.utils import get_module_logger

logger = get_module_logger(__name__)

@ -1,13 +1,14 @@
import json
from pathlib import Path

import numpy as np
import pandas as pd
import requests

from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.utils import download_file_from_url
from data_pipeline.utils import get_module_logger
from data_pipeline.utils import unzip_file_from_url

logger = get_module_logger(__name__)

@ -282,12 +283,20 @@ class CensusACSMedianIncomeETL(ExtractTransformLoad):

# Download MSA median incomes
logger.info("Starting download of MSA median incomes.")
download = requests.get(
    self.MSA_MEDIAN_INCOME_URL,
    verify=None,
    timeout=settings.REQUESTS_DEFAULT_TIMOUT,
)
self.msa_median_incomes = json.loads(download.content)

# Download state median incomes
logger.info("Starting download of state median incomes.")
download_state = requests.get(
    self.STATE_MEDIAN_INCOME_URL,
    verify=None,
    timeout=settings.REQUESTS_DEFAULT_TIMOUT,
)
self.state_median_incomes = json.loads(download_state.content)
## NOTE we already have PR's MI here

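The `timeout=` argument added in this hunk keeps a stalled Census endpoint from hanging the pipeline forever. A minimal sketch of the pattern; the constant's value here is a guess and not taken from the source (the actual `REQUESTS_DEFAULT_TIMOUT`, spelled as in the codebase, lives in `data_pipeline.config.settings`):

    import requests

    REQUESTS_DEFAULT_TIMOUT = 3600  # seconds; illustrative value only

    download = requests.get(
        "https://example.com/some_download.json",
        timeout=REQUESTS_DEFAULT_TIMOUT,
    )
    download.raise_for_status()
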
@ -1,12 +1,13 @@
import json
from typing import List

import numpy as np
import pandas as pd
import requests

from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.score import field_names
from data_pipeline.utils import get_module_logger

pd.options.mode.chained_assignment = "raise"

@ -146,6 +147,63 @@ class CensusDecennialETL(ExtractTransformLoad):
    field_names.CENSUS_DECENNIAL_UNEMPLOYMENT_FIELD_2009
)

# Race/Ethnicity fields
self.TOTAL_RACE_POPULATION_FIELD = "PCT086001"  # Total
self.ASIAN_FIELD = "PCT086002"  # Total!!Asian
self.BLACK_FIELD = "PCT086003"  # Total!!Black or African American
self.HAWAIIAN_FIELD = (
    "PCT086004"  # Total!!Native Hawaiian and Other Pacific Islander
)
# Note that the 2010 census for island areas does not break out
# Hispanic and non-Hispanic white, so this is slightly different from
# our other demographic data
self.NON_HISPANIC_WHITE_FIELD = "PCT086005"  # Total!!White
self.HISPANIC_FIELD = "PCT086006"  # Total!!Hispanic or Latino
self.OTHER_RACE_FIELD = "PCT086007"  # Total!!Other Ethnic Origin or Race

self.TOTAL_RACE_POPULATION_VI_FIELD = "P003001"  # Total
self.BLACK_VI_FIELD = (
    "P003003"  # Total!!One race!!Black or African American alone
)
self.AMERICAN_INDIAN_VI_FIELD = "P003005"  # Total!!One race!!American Indian and Alaska Native alone
self.ASIAN_VI_FIELD = "P003006"  # Total!!One race!!Asian alone
self.HAWAIIAN_VI_FIELD = "P003007"  # Total!!One race!!Native Hawaiian and Other Pacific Islander alone
self.TWO_OR_MORE_RACES_VI_FIELD = "P003009"  # Total!!Two or More Races
self.NON_HISPANIC_WHITE_VI_FIELD = (
    "P005006"  # Total!!Not Hispanic or Latino!!One race!!White alone
)
self.HISPANIC_VI_FIELD = "P005002"  # Total!!Hispanic or Latino
self.OTHER_RACE_VI_FIELD = (
    "P003008"  # Total!!One race!!Some Other Race alone
)

self.TOTAL_RACE_POPULATION_FIELD_NAME = (
    "Total population surveyed on racial data"
)
self.BLACK_FIELD_NAME = "Black or African American"
self.AMERICAN_INDIAN_FIELD_NAME = "American Indian / Alaska Native"
self.ASIAN_FIELD_NAME = "Asian"
self.HAWAIIAN_FIELD_NAME = "Native Hawaiian or Pacific"
self.TWO_OR_MORE_RACES_FIELD_NAME = "two or more races"
self.NON_HISPANIC_WHITE_FIELD_NAME = "White"
self.HISPANIC_FIELD_NAME = "Hispanic or Latino"
# Note that `other` is lowercase because the whole field will show up in the download
# file as "Percent other races"
self.OTHER_RACE_FIELD_NAME = "other races"

# Name output demographics fields.
self.RE_OUTPUT_FIELDS = [
    self.BLACK_FIELD_NAME,
    self.AMERICAN_INDIAN_FIELD_NAME,
    self.ASIAN_FIELD_NAME,
    self.HAWAIIAN_FIELD_NAME,
    self.TWO_OR_MORE_RACES_FIELD_NAME,
    self.NON_HISPANIC_WHITE_FIELD_NAME,
    self.HISPANIC_FIELD_NAME,
    self.OTHER_RACE_FIELD_NAME,
]

var_list = [
    self.MEDIAN_INCOME_FIELD,
    self.TOTAL_HOUSEHOLD_RATIO_INCOME_TO_POVERTY_LEVEL_FIELD,

@ -161,6 +219,13 @@ class CensusDecennialETL(ExtractTransformLoad):
    self.EMPLOYMENT_FEMALE_IN_LABOR_FORCE_FIELD,
    self.EMPLOYMENT_FEMALE_UNEMPLOYED_FIELD,
    self.TOTAL_POP_FIELD,
    self.TOTAL_RACE_POPULATION_FIELD,
    self.ASIAN_FIELD,
    self.BLACK_FIELD,
    self.HAWAIIAN_FIELD,
    self.NON_HISPANIC_WHITE_FIELD,
    self.HISPANIC_FIELD,
    self.OTHER_RACE_FIELD,
]
var_list = ",".join(var_list)

@ -179,6 +244,15 @@ class CensusDecennialETL(ExtractTransformLoad):
    self.EMPLOYMENT_FEMALE_IN_LABOR_FORCE_VI_FIELD,
    self.EMPLOYMENT_FEMALE_UNEMPLOYED_VI_FIELD,
    self.TOTAL_POP_VI_FIELD,
    self.BLACK_VI_FIELD,
    self.AMERICAN_INDIAN_VI_FIELD,
    self.ASIAN_VI_FIELD,
    self.HAWAIIAN_VI_FIELD,
    self.TWO_OR_MORE_RACES_VI_FIELD,
    self.NON_HISPANIC_WHITE_VI_FIELD,
    self.HISPANIC_VI_FIELD,
    self.OTHER_RACE_VI_FIELD,
    self.TOTAL_RACE_POPULATION_VI_FIELD,
]
var_list_vi = ",".join(var_list_vi)

@ -209,6 +283,23 @@ class CensusDecennialETL(ExtractTransformLoad):
    self.EMPLOYMENT_MALE_UNEMPLOYED_FIELD: self.EMPLOYMENT_MALE_UNEMPLOYED_FIELD,
    self.EMPLOYMENT_FEMALE_IN_LABOR_FORCE_FIELD: self.EMPLOYMENT_FEMALE_IN_LABOR_FORCE_FIELD,
    self.EMPLOYMENT_FEMALE_UNEMPLOYED_FIELD: self.EMPLOYMENT_FEMALE_UNEMPLOYED_FIELD,
    self.TOTAL_RACE_POPULATION_FIELD: self.TOTAL_RACE_POPULATION_FIELD_NAME,
    self.TOTAL_RACE_POPULATION_VI_FIELD: self.TOTAL_RACE_POPULATION_FIELD_NAME,
    # Note there is no American Indian data for AS/GU/MI
    self.AMERICAN_INDIAN_VI_FIELD: self.AMERICAN_INDIAN_FIELD_NAME,
    self.ASIAN_FIELD: self.ASIAN_FIELD_NAME,
    self.ASIAN_VI_FIELD: self.ASIAN_FIELD_NAME,
    self.BLACK_FIELD: self.BLACK_FIELD_NAME,
    self.BLACK_VI_FIELD: self.BLACK_FIELD_NAME,
    self.HAWAIIAN_FIELD: self.HAWAIIAN_FIELD_NAME,
    self.HAWAIIAN_VI_FIELD: self.HAWAIIAN_FIELD_NAME,
    self.TWO_OR_MORE_RACES_VI_FIELD: self.TWO_OR_MORE_RACES_FIELD_NAME,
    self.NON_HISPANIC_WHITE_FIELD: self.NON_HISPANIC_WHITE_FIELD_NAME,
    self.NON_HISPANIC_WHITE_VI_FIELD: self.NON_HISPANIC_WHITE_FIELD_NAME,
    self.HISPANIC_FIELD: self.HISPANIC_FIELD_NAME,
    self.HISPANIC_VI_FIELD: self.HISPANIC_FIELD_NAME,
    self.OTHER_RACE_FIELD: self.OTHER_RACE_FIELD_NAME,
    self.OTHER_RACE_VI_FIELD: self.OTHER_RACE_FIELD_NAME,
}

# To do: Ask Census Slack Group about whether you need to hardcode the county fips

@ -251,6 +342,8 @@ class CensusDecennialETL(ExtractTransformLoad):
    + "&for=tract:*&in=state:{}%20county:{}"
)

self.final_race_fields: List[str] = []

self.df: pd.DataFrame
self.df_vi: pd.DataFrame
self.df_all: pd.DataFrame

@ -263,14 +356,17 @@ class CensusDecennialETL(ExtractTransformLoad):
    f"Downloading data for state/territory {island['state_abbreviation']}"
)
for county in island["county_fips"]:
    api_url = self.API_URL.format(
        self.DECENNIAL_YEAR,
        island["state_abbreviation"],
        island["var_list"],
        island["fips"],
        county,
    )
    logger.debug(f"CENSUS: Requesting {api_url}")
    download = requests.get(
        api_url,
        timeout=settings.REQUESTS_DEFAULT_TIMOUT,
    )

    df = json.loads(download.content)

@ -377,6 +473,19 @@ class CensusDecennialETL(ExtractTransformLoad):
    self.df_all["state"] + self.df_all["county"] + self.df_all["tract"]
)

# Calculate stats by race
for race_field_name in self.RE_OUTPUT_FIELDS:
    output_field_name = (
        field_names.PERCENT_PREFIX
        + race_field_name
        + field_names.ISLAND_AREA_BACKFILL_SUFFIX
    )
    self.final_race_fields.append(output_field_name)
    self.df_all[output_field_name] = (
        self.df_all[race_field_name]
        / self.df_all[self.TOTAL_RACE_POPULATION_FIELD_NAME]
    )

# Reporting Missing Values
for col in self.df_all.columns:
    missing_value_count = self.df_all[col].isnull().sum()

@ -400,7 +509,7 @@ class CensusDecennialETL(ExtractTransformLoad):
    self.PERCENTAGE_HOUSEHOLDS_BELOW_200_PERC_POVERTY_LEVEL_FIELD_NAME,
    self.PERCENTAGE_HIGH_SCHOOL_ED_FIELD_NAME,
    self.UNEMPLOYMENT_FIELD_NAME,
] + self.final_race_fields

self.df_all[columns_to_include].to_csv(
    path_or_buf=self.OUTPUT_PATH / "usa.csv", index=False
)

@ -1,9 +1,10 @@
from pathlib import Path

import pandas as pd

from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.base import ValidGeoLevel
from data_pipeline.utils import get_module_logger
from data_pipeline.config import settings

logger = get_module_logger(__name__)

@ -21,14 +22,35 @@ class ChildOpportunityIndex(ExtractTransformLoad):
    Full technical documents: https://www.diversitydatakids.org/sites/default/files/2020-02/ddk_coi2.0_technical_documentation_20200212.pdf.

    Github repo: https://github.com/diversitydatakids/COI/
    """

    # Metadata for the baseclass
    NAME = "child_opportunity_index"
    GEO_LEVEL = ValidGeoLevel.CENSUS_TRACT
    LOAD_YAML_CONFIG: bool = True

    # Define these for easy code completion
    EXTREME_HEAT_FIELD: str
    HEALTHY_FOOD_FIELD: str
    IMPENETRABLE_SURFACES_FIELD: str
    READING_FIELD: str

    PUERTO_RICO_EXPECTED_IN_DATA = False

    def __init__(self):
        if settings.DATASOURCE_RETRIEVAL_FROM_AWS:
            self.SOURCE_URL = (
                f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/"
                "child_opportunity_index/raw.zip"
            )
        else:
            self.SOURCE_URL = (
                "https://data.diversitydatakids.org/datastore/zip/f16fff12-b1e5-4f60-85d3-"
                "3a0ededa30a0?format=csv"
            )

        # TODO: Decide about nixing this
        self.TRACT_INPUT_COLUMN_NAME = self.INPUT_GEOID_TRACT_FIELD_NAME

        self.OUTPUT_PATH: Path = (
            self.DATA_PATH / "dataset" / "child_opportunity_index"

@ -40,31 +62,19 @@ class ChildOpportunityIndex(ExtractTransformLoad):
        self.IMPENETRABLE_SURFACES_INPUT_FIELD = "HE_GREEN"
        self.READING_INPUT_FIELD = "ED_READING"

        self.output_df: pd.DataFrame

    def extract(self) -> None:
        logger.info("Starting 51MB data download.")
        super().extract(
            source_url=self.SOURCE_URL,
            extract_path=self.get_tmp_path(),
        )

    def transform(self) -> None:
        logger.info("Starting transforms.")
        raw_df = pd.read_csv(
            filepath_or_buffer=self.get_tmp_path() / "raw.csv",
            # The following need to remain as strings for all of their digits, not get
            # converted to numbers.
            dtype={

@ -73,16 +83,13 @@ class ChildOpportunityIndex(ExtractTransformLoad):
            low_memory=False,
        )

        output_df = raw_df.rename(
            columns={
                self.TRACT_INPUT_COLUMN_NAME: self.GEOID_TRACT_FIELD_NAME,
                self.EXTREME_HEAT_INPUT_FIELD: self.EXTREME_HEAT_FIELD,
                self.HEALTHY_FOOD_INPUT_FIELD: self.HEALTHY_FOOD_FIELD,
                self.IMPENETRABLE_SURFACES_INPUT_FIELD: self.IMPENETRABLE_SURFACES_FIELD,
                self.READING_INPUT_FIELD: self.READING_FIELD,
            }
        )

@ -95,8 +102,8 @@ class ChildOpportunityIndex(ExtractTransformLoad):

        # Convert percents from 0-100 to 0-1 to standardize with our other fields.
        percent_fields_to_convert = [
            self.HEALTHY_FOOD_FIELD,
            self.IMPENETRABLE_SURFACES_FIELD,
        ]

        for percent_field_to_convert in percent_fields_to_convert:
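(The loop body falls outside this hunk; assuming a simple divide-by-100, a minimal sketch of the conversion would be:)

    for percent_field_to_convert in percent_fields_to_convert:
        output_df[percent_field_to_convert] = (
            output_df[percent_field_to_convert] / 100
        )
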
@ -105,11 +112,3 @@ class ChildOpportunityIndex(ExtractTransformLoad):
        )

        self.output_df = output_df

@ -1,64 +1,51 @@
from pathlib import Path

import pandas as pd

from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.base import ValidGeoLevel
from data_pipeline.utils import get_module_logger

logger = get_module_logger(__name__)


class DOEEnergyBurden(ExtractTransformLoad):
    NAME = "doe_energy_burden"
    SOURCE_URL: str = (
        settings.AWS_JUSTICE40_DATASOURCES_URL
        + "/DOE_LEAD_AMI_TRACT_2018_ALL.csv.zip"
    )
    GEO_LEVEL = ValidGeoLevel.CENSUS_TRACT
    LOAD_YAML_CONFIG: bool = True

    REVISED_ENERGY_BURDEN_FIELD_NAME: str

    def __init__(self):
        self.OUTPUT_PATH: Path = (
            self.DATA_PATH / "dataset" / "doe_energy_burden"
        )

        self.INPUT_ENERGY_BURDEN_FIELD_NAME = "BURDEN"

        self.raw_df: pd.DataFrame
        self.output_df: pd.DataFrame

    def transform(self) -> None:
        logger.info("Starting DOE Energy Burden transforms.")
        raw_df: pd.DataFrame = pd.read_csv(
            filepath_or_buffer=self.get_tmp_path()
            / "DOE_LEAD_AMI_TRACT_2018_ALL.csv",
            # The following need to remain as strings for all of their digits, not get converted to numbers.
            dtype={
                self.INPUT_GEOID_TRACT_FIELD_NAME: "string",
            },
            low_memory=False,
        )

        logger.info("Renaming columns and ensuring output format is correct")
        output_df = raw_df.rename(
            columns={
                self.INPUT_ENERGY_BURDEN_FIELD_NAME: self.REVISED_ENERGY_BURDEN_FIELD_NAME,
                self.INPUT_GEOID_TRACT_FIELD_NAME: self.GEOID_TRACT_FIELD_NAME,
            }
        )

@ -71,11 +58,3 @@ class DOEEnergyBurden(ExtractTransformLoad):
        )

        self.output_df = output_df

@ -0,0 +1,16 @@
# DOT travel barriers

The description below is taken directly from DOT:

Consistent with OMB's Interim Guidance for the Justice40 Initiative, DOT's interim definition of DACs includes (a) certain qualifying census tracts, (b) any Tribal land, or (c) any territory or possession of the United States. DOT has provided a mapping tool to assist applicants in identifying whether a project is located in a Disadvantaged Community, available at Transportation Disadvantaged Census Tracts (arcgis.com). A shapefile of the geospatial data is available at Transportation Disadvantaged Census Tracts shapefile (version 2.0, posted 5/10/22).

The DOT interim definition for DACs was developed by an internal and external collaborative research process (see recordings from November 2021 public meetings). It includes data for 22 indicators collected at the census tract level and grouped into six (6) categories of transportation disadvantage. The numbers in parentheses show how many indicators fall in each category:

- Transportation access disadvantage identifies communities and places that spend more, and take longer, to get where they need to go. (4)
- Health disadvantage identifies communities based on variables associated with adverse health outcomes, disability, as well as environmental exposures. (3)
- Environmental disadvantage identifies communities with disproportionately high levels of certain air pollutants and high potential presence of lead-based paint in housing units. (6)
- Economic disadvantage identifies areas and populations with high poverty, low wealth, lack of local jobs, low homeownership, low educational attainment, and high inequality. (7)
- Resilience disadvantage identifies communities vulnerable to hazards caused by climate change. (1)
- Equity disadvantage identifies communities with a high percentile of persons (age 5+) who speak English "less than well." (1)

The CEJST uses only Transportation Access Disadvantage.

@ -0,0 +1,69 @@
# pylint: disable=unsubscriptable-object
# pylint: disable=unsupported-assignment-operation
import geopandas as gpd
import pandas as pd
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.base import ValidGeoLevel
from data_pipeline.utils import get_module_logger
from data_pipeline.config import settings

logger = get_module_logger(__name__)


class TravelCompositeETL(ExtractTransformLoad):
    """ETL class for the DOT Travel Disadvantage Dataset"""

    NAME = "travel_composite"

    if settings.DATASOURCE_RETRIEVAL_FROM_AWS:
        SOURCE_URL = (
            f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/"
            "dot_travel_composite/Shapefile_and_Metadata.zip"
        )
    else:
        SOURCE_URL = "https://www.transportation.gov/sites/dot.gov/files/Shapefile_and_Metadata.zip"

    GEO_LEVEL = ValidGeoLevel.CENSUS_TRACT
    PUERTO_RICO_EXPECTED_IN_DATA = False
    LOAD_YAML_CONFIG: bool = True

    # Output score variables (values set on datasets.yml) for linting purposes
    TRAVEL_BURDEN_FIELD_NAME: str

    def __init__(self):
        # Define the full path for the input shapefile
        self.INPUT_SHP = (
            self.get_tmp_path() / "DOT_Disadvantage_Layer_Final_April2022.shp"
        )

        # This is the main dataframe
        self.df: pd.DataFrame

        # Start dataset-specific vars here
        ## Average of Transportation Indicator Percentiles (calculated)
        ## Calculated: Average of (EPL_TCB+EPL_NWKI+EPL_NOVEH+EPL_COMMUTE) excluding NULLS
        ## See metadata for more information
        self.INPUT_TRAVEL_DISADVANTAGE_FIELD_NAME = "Transp_TH"
        self.INPUT_GEOID_TRACT_FIELD_NAME = "FIPS"

    def transform(self) -> None:
        """Reads the unzipped data file into memory and applies the following
        transformations to prepare it for the load() method:

        - Renames the Census Tract column to match the other datasets
        - Converts to CSV
        """
        logger.info("Transforming DOT Travel Disadvantage Data")

        # Read in the unzipped shapefile from the data source,
        # reformat it to be a standard df, remove unassigned rows, and
        # then rename the Census Tract column for merging.
        df_dot: pd.DataFrame = gpd.read_file(self.INPUT_SHP)
        df_dot = df_dot.rename(
            columns={
                self.INPUT_GEOID_TRACT_FIELD_NAME: self.GEOID_TRACT_FIELD_NAME,
                self.INPUT_TRAVEL_DISADVANTAGE_FIELD_NAME: self.TRAVEL_BURDEN_FIELD_NAME,
            }
        ).dropna(subset=[self.GEOID_TRACT_FIELD_NAME])
        # Assign the final df to the class' output_df for the load method
        self.output_df = df_dot
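
As a usage sketch, the ETL classes in this PR all follow the base class's extract/transform/load lifecycle; assuming that lifecycle, running this dataset end to end would look roughly like:

    etl = TravelCompositeETL()
    etl.extract()    # downloads and unzips SOURCE_URL into get_tmp_path()
    etl.transform()  # reads the shapefile and sets self.output_df
    etl.load()       # writes the standard usa.csv output via the base class
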
@ -0,0 +1,40 @@
The following is the description from eAMLIS as of August 16, 2022.
---

e-AMLIS is not a comprehensive database of all AML features or all AML grant activities. e-AMLIS is a national inventory that provides information about known abandoned mine land (AML) features, including polluted waters. The majority of the data in e-AMLIS provides information about known coal AML features for the 25 states and 3 tribal SMCRA-approved AML Programs. e-AMLIS also provides limited information on non-coal AML features and non-coal reclamation projects, as well as AML features for states and tribes that do not have an approved AML Program. Additionally, e-AMLIS only accounts for the direct construction cost to reclaim each AML feature that has been identified by states and Tribes. Other project costs such as planning, design, permitting, and construction oversight are not tracked in e-AMLIS.

The figures in e-AMLIS are further broken down into 3 cost categories:

Unfunded Cost represents pre-construction estimates to reclaim the AML feature;
Funded Cost indicates that construction has been approved by OSM and these figures may change during construction;
Completed Cost is the actual cost to complete construction and reclamation of the AML feature.
DOI/OSMRE's Financial Business & Management System is the system of record to obtain comprehensive information about all AML grant expenditures.

An inventory of land and water impacted by past mining (primarily coal mining) is maintained by OSMRE to provide information needed to implement the Surface Mining Control and Reclamation Act of 1977 (SMCRA). The inventory contains information on the location, type, and extent of AML impacts, as well as information on the cost associated with the reclamation of those problems. The inventory is based upon field surveys by State, Tribal, and OSMRE program officials. It is dynamic to the extent that it is modified as new problems are identified and existing problems are reclaimed.

The Abandoned Mine Land Reclamation Act (AMRA) of 1990 amended SMCRA. The amended law expanded the scope of data OSMRE must collect regarding AML reclamation programs and progress. On December 20, 2006, SMCRA was amended under the Tax Relief and Health Care Act of 2006 to add sources of program funding, emphasize high priority coal reclamation, and expand OSMRE's responsibilities towards implementation and management of the AML Inventory.

WHO MAINTAINS THE INFORMATION IN THE AML INVENTORY?
The information is developed and/or updated by the States and Indian Tribes managing their own AML programs under SMCRA or by the OSMRE office responsible for States and Indian Tribes not managing their own AML problems.

TYPES OF PROBLEMS
"High Priority"
The most serious AML problems are those posing a threat to health, safety and general welfare of people (Priority 1 and Priority 2, or "high priority"). These are the only problems which the law requires to be inventoried. There are 17 Priority 1 and 2 problem types.

Emergencies
Under the 2006 amendments to SMCRA, AML grants to states and tribes increased from $145 million in FY 2007 to $395 million in FY 2011. The increase in funding allowed states to take responsibility for their AML emergencies as part of their regular AML programs.

Until FY 2011, OSMRE provided Abandoned Mine Land (AML) State Emergency grants to the 15 states that manage their own emergency programs under the Abandoned Mine Land Reclamation Program. Thirteen other states and tribes that had approved AML programs did not receive emergency grants. OSMRE managed emergencies in those 13 states and tribes as well as in Federal Program States without AML programs.

OSMRE officially notified the state and tribal officials and Congressional delegations that, starting on October 1, 2010, they would fully assume responsibility for funding their emergency programs. OSMRE then worked with states and tribes to ensure a smooth transition to the states' assumption of responsibility for administering state emergency programs. New funding and carryover balances were used during the transition to address immediate needs.

Overall, OSMRE successfully transitioned the financial responsibility to the states in FY 2011, and continues to provide technical and program assistance when needed. States with AML programs are now in a position to effectively handle emergency programs.

Environmental
AML problems impacting the environment are known as Priority 3 problems. While SMCRA does not require OSMRE to inventory every unreclaimed Priority 3 problem, some program States and Indian tribes have chosen to submit such information. Information for Priority 3 problem types is required when reclamation activities are funded, and information on completed reclamation of Priority 3 problems is kept in the inventory.

Other Coal Mine Related Problems
Information is also kept on lower priority coal-related AML problems, such as lower priority coal-related projects involving public facilities and the development of publicly-owned land. The lower priority problems are also categorized as Priority 4 and 5 problem types.

Non-coal Mine Related AML Problems
The non-coal problems are primarily problems reclaimed by States/Indian tribes that had "Certified" having addressed all known eligible coal related problems. States and Indian tribes managing their own AML programs reclaimed non-coal problems prior to addressing all their coal related problems under SMCRA SEC. 409 (FILLING VOIDS AND SEALING TUNNELS), at the request of the Governor of the state or the governing body of the Indian tribe, if the Secretary of the Department of the Interior determines such problems meet the criteria for Priority 1, extreme hazard, problems. This Program Area contains historical reclamation accomplishments for Certified Programs reclaiming Priority 1, 2, and 3 non-coal Problem Type features with pre-AML Reauthorization SMCRA funds distributed prior to October 1, 2007.

data/data-pipeline/data_pipeline/etl/sources/eamlis/etl.py (new file, 81 lines)

@ -0,0 +1,81 @@
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
import geopandas as gpd
|
||||||
|
import pandas as pd
|
||||||
|
from data_pipeline.config import settings
|
||||||
|
from data_pipeline.etl.base import ExtractTransformLoad
|
||||||
|
from data_pipeline.etl.base import ValidGeoLevel
|
||||||
|
from data_pipeline.etl.sources.geo_utils import add_tracts_for_geometries
|
||||||
|
from data_pipeline.utils import get_module_logger
|
||||||
|
|
||||||
|
logger = get_module_logger(__name__)
|
||||||
|
|
||||||
|
|
||||||
|
class AbandonedMineETL(ExtractTransformLoad):
|
||||||
|
"""Data from Office Of Surface Mining Reclamation and Enforcement's
|
||||||
|
eAMLIS. These are the locations of abandoned mines.
|
||||||
|
"""
|
||||||
|
|
||||||
|
# Metadata for the baseclass
|
||||||
|
NAME = "eamlis"
|
||||||
|
GEO_LEVEL = ValidGeoLevel.CENSUS_TRACT
|
||||||
|
AML_BOOLEAN: str
|
||||||
|
LOAD_YAML_CONFIG: bool = True
|
||||||
|
|
||||||
|
PUERTO_RICO_EXPECTED_IN_DATA = False
|
||||||
|
EXPECTED_MISSING_STATES = [
|
||||||
|
"10",
|
||||||
|
"11",
|
||||||
|
"12",
|
||||||
|
"15",
|
||||||
|
"23",
|
||||||
|
"27",
|
||||||
|
"31",
|
||||||
|
"33",
|
||||||
|
"34",
|
||||||
|
"36",
|
||||||
|
"45",
|
||||||
|
"50",
|
||||||
|
"55",
|
||||||
|
]
|
||||||
|
|
||||||
|
# Define these for easy code completion
|
||||||
|
def __init__(self):
|
||||||
|
self.SOURCE_URL = (
|
||||||
|
settings.AWS_JUSTICE40_DATASOURCES_URL
|
||||||
|
+ "/eAMLIS export of all data.tsv.zip"
|
||||||
|
)
|
||||||
|
|
||||||
|
self.TRACT_INPUT_COLUMN_NAME = self.INPUT_GEOID_TRACT_FIELD_NAME
|
||||||
|
|
||||||
|
self.OUTPUT_PATH: Path = (
|
||||||
|
self.DATA_PATH / "dataset" / "abandoned_mine_land_inventory_system"
|
||||||
|
)
|
||||||
|
|
||||||
|
self.COLUMNS_TO_KEEP = [
|
||||||
|
self.GEOID_TRACT_FIELD_NAME,
|
||||||
|
self.AML_BOOLEAN,
|
||||||
|
]
|
||||||
|
|
||||||
|
self.output_df: pd.DataFrame
|
||||||
|
|
||||||
|
def transform(self) -> None:
|
||||||
|
logger.info("Starting eAMLIS transforms.")
|
||||||
|
df = pd.read_csv(
|
||||||
|
self.get_tmp_path() / "eAMLIS export of all data.tsv",
|
||||||
|
sep="\t",
|
||||||
|
low_memory=False,
|
||||||
|
)
|
||||||
|
gdf = gpd.GeoDataFrame(
|
||||||
|
df,
|
||||||
|
geometry=gpd.points_from_xy(
|
||||||
|
x=df["Longitude"],
|
||||||
|
y=df["Latitude"],
|
||||||
|
),
|
||||||
|
crs="epsg:4326",
|
||||||
|
)
|
||||||
|
gdf = gdf.drop_duplicates(subset=["geometry"], keep="last")
|
||||||
|
gdf_tracts = add_tracts_for_geometries(gdf)
|
||||||
|
gdf_tracts = gdf_tracts.drop_duplicates(self.GEOID_TRACT_FIELD_NAME)
|
||||||
|
gdf_tracts[self.AML_BOOLEAN] = True
|
||||||
|
self.output_df = gdf_tracts[self.COLUMNS_TO_KEEP]
|
|
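A minimal sketch of the point-to-tract flagging this transform performs, using hypothetical toy geometries in place of the real eAMLIS export and census tract files. The helper add_tracts_for_geometries is approximated here by a plain spatial join, and predicate= is the current geopandas keyword (the repo's helper uses the older op=):

import geopandas as gpd
import pandas as pd
from shapely.geometry import Polygon

# Hypothetical tract polygon standing in for the national tract GeoJSON.
tracts = gpd.GeoDataFrame(
    {"GEOID10_TRACT": ["01001020100"]},
    geometry=[Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])],
    crs="epsg:4326",
)

# Hypothetical mine coordinates, like rows from the eAMLIS TSV export.
df = pd.DataFrame({"Longitude": [0.5, 5.0], "Latitude": [0.5, 5.0]})
gdf = gpd.GeoDataFrame(
    df,
    geometry=gpd.points_from_xy(x=df["Longitude"], y=df["Latitude"]),
    crs="epsg:4326",
)

# Inner spatial join drops points that fall outside every tract; each
# remaining tract gets flagged True, mirroring the AML_BOOLEAN column.
joined = gpd.sjoin(gdf, tracts, how="inner", predicate="intersects")
flags = joined.drop_duplicates("GEOID10_TRACT")[["GEOID10_TRACT"]].copy()
flags["is_there_at_least_one_abandoned_mine"] = True
print(flags)  # one row: tract 01001020100 flagged True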
@@ -1,6 +1,6 @@
import pandas as pd

from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.base import ValidGeoLevel
from data_pipeline.score import field_names
from data_pipeline.utils import get_module_logger
@@ -8,21 +8,22 @@ logger = get_module_logger(__name__)


class EJSCREENETL(ExtractTransformLoad):
    """Load updated EJSCREEN data."""

    NAME = "ejscreen"
    GEO_LEVEL: ValidGeoLevel = ValidGeoLevel.CENSUS_TRACT
    INPUT_GEOID_TRACT_FIELD_NAME: str = "ID"

    def __init__(self):
        self.EJSCREEN_FTP_URL = "https://gaftp.epa.gov/EJSCREEN/2021/EJSCREEN_2021_USPR_Tracts.csv.zip"
        self.EJSCREEN_CSV = (
            self.get_tmp_path() / "EJSCREEN_2021_USPR_Tracts.csv"
        )
        self.CSV_PATH = self.DATA_PATH / "dataset" / "ejscreen"
        self.df: pd.DataFrame

        self.COLUMNS_TO_KEEP = [
            self.GEOID_TRACT_FIELD_NAME,
            # pylint: disable=duplicate-code
            field_names.AIR_TOXICS_CANCER_RISK_FIELD,
            field_names.RESPIRATORY_HAZARD_FIELD,
@@ -39,6 +40,7 @@ class EJSCREENETL(ExtractTransformLoad):
            field_names.OVER_64_FIELD,
            field_names.UNDER_5_FIELD,
            field_names.LEAD_PAINT_FIELD,
            field_names.UST_FIELD,
        ]

    def extract(self) -> None:
@@ -53,19 +55,16 @@ class EJSCREENETL(ExtractTransformLoad):
        logger.info("Transforming EJScreen Data")
        self.df = pd.read_csv(
            self.EJSCREEN_CSV,
            dtype={self.INPUT_GEOID_TRACT_FIELD_NAME: str},
            # EJSCREEN writes the word "None" for NA data.
            na_values=["None"],
            low_memory=False,
        )

        # rename ID to Tract ID
        self.output_df = self.df.rename(
            columns={
                self.INPUT_GEOID_TRACT_FIELD_NAME: self.GEOID_TRACT_FIELD_NAME,
                "CANCER": field_names.AIR_TOXICS_CANCER_RISK_FIELD,
                "RESP": field_names.RESPIRATORY_HAZARD_FIELD,
                "DSLPM": field_names.DIESEL_FIELD,
@@ -81,14 +80,6 @@ class EJSCREENETL(ExtractTransformLoad):
                "OVER64PCT": field_names.OVER_64_FIELD,
                "UNDER5PCT": field_names.UNDER_5_FIELD,
                "PRE1960PCT": field_names.LEAD_PAINT_FIELD,
                "UST": field_names.UST_FIELD,  # added for 2021 update
            },
        )
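One subtle point in the read above is worth illustrating: EJSCREEN literally writes the string "None" where data are missing, so the na_values=["None"] argument is what turns those cells into real NaNs. A small sketch with hypothetical inline data:

import io

import pandas as pd

# Hypothetical CSV shaped like an EJSCREEN extract: "None" marks missing data.
raw = io.StringIO("ID,CANCER\n01001020100,32.1\n01001020200,None\n")

df = pd.read_csv(raw, dtype={"ID": str}, na_values=["None"])
print(df["CANCER"].isna().tolist())  # [False, True]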
@@ -1,5 +1,4 @@
import pandas as pd
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.utils import get_module_logger
@@ -58,7 +57,6 @@ class EJSCREENAreasOfConcernETL(ExtractTransformLoad):

        # TO DO: As a one off we did all the processing in a separate Notebook
        # Can add here later for a future PR

    def load(self) -> None:
        if self.ejscreen_areas_of_concern_data_exists():
@@ -1,10 +1,11 @@
from pathlib import Path

import pandas as pd
from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.score import field_names
from data_pipeline.utils import get_module_logger
from data_pipeline.utils import unzip_file_from_url

logger = get_module_logger(__name__)
@@ -1,9 +1,11 @@
from pathlib import Path

import pandas as pd
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.score import field_names
from data_pipeline.utils import get_module_logger
from data_pipeline.utils import unzip_file_from_url
from data_pipeline.config import settings

logger = get_module_logger(__name__)
@@ -20,7 +22,17 @@ class EPARiskScreeningEnvironmentalIndicatorsETL(ExtractTransformLoad):
    """

    def __init__(self):
        if settings.DATASOURCE_RETRIEVAL_FROM_AWS:
            self.AGGREGATED_RSEI_SCORE_FILE_URL = (
                f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/"
                "epa_rsei/CensusMicroTracts2019_2019_aggregated.zip"
            )
        else:
            self.AGGREGATED_RSEI_SCORE_FILE_URL = (
                "http://abt-rsei.s3.amazonaws.com/microdata2019/"
                "census_agg/CensusMicroTracts2019_2019_aggregated.zip"
            )

        self.OUTPUT_PATH: Path = self.DATA_PATH / "dataset" / "epa_rsei"
        self.EPA_RSEI_SCORE_THRESHOLD_CUTOFF = 0.75
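The DATASOURCE_RETRIEVAL_FROM_AWS switch above recurs across several ETLs in this release: when set, the pipeline pulls a mirrored copy of the raw file from the Justice40 bucket instead of the original publisher. A hedged sketch of the pattern, with module-level constants standing in for data_pipeline.config settings:

# Sketch of the source-selection pattern used by several ETLs in this PR.
# These constants are assumptions standing in for data_pipeline.config.
AWS_JUSTICE40_DATASOURCES_URL = "https://justice40-data.s3.amazonaws.com/data-sources"
DATASOURCE_RETRIEVAL_FROM_AWS = True

if DATASOURCE_RETRIEVAL_FROM_AWS:
    # Mirrored copy in the Justice40 bucket: stable and versioned.
    url = (
        f"{AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/"
        "epa_rsei/CensusMicroTracts2019_2019_aggregated.zip"
    )
else:
    # Original publisher endpoint: canonical, but it can move or throttle.
    url = (
        "http://abt-rsei.s3.amazonaws.com/microdata2019/"
        "census_agg/CensusMicroTracts2019_2019_aggregated.zip"
    )
print(url)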
@@ -0,0 +1,3 @@
# FSF flood risk data

Flood risk is computed as being within a 1-in-100-year flood zone.
@@ -0,0 +1,86 @@
# pylint: disable=unsubscriptable-object
# pylint: disable=unsupported-assignment-operation
import pandas as pd
from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.base import ValidGeoLevel
from data_pipeline.utils import get_module_logger

logger = get_module_logger(__name__)


class FloodRiskETL(ExtractTransformLoad):
    """ETL class for the First Street Foundation flood risk dataset"""

    NAME = "fsf_flood_risk"
    # These data were emailed to the J40 team while first street got
    # their official data sharing channels setup.
    SOURCE_URL = settings.AWS_JUSTICE40_DATASOURCES_URL + "/fsf_flood.zip"
    GEO_LEVEL = ValidGeoLevel.CENSUS_TRACT
    LOAD_YAML_CONFIG: bool = True

    # Output score variables (values set on datasets.yml) for linting purposes
    COUNT_PROPERTIES: str
    PROPERTIES_AT_RISK_FROM_FLOODING_TODAY: str
    PROPERTIES_AT_RISK_FROM_FLOODING_IN_30_YEARS: str
    SHARE_OF_PROPERTIES_AT_RISK_FROM_FLOODING_TODAY: str
    SHARE_OF_PROPERTIES_AT_RISK_FROM_FLOODING_IN_30_YEARS: str

    def __init__(self):
        # define the full path for the input CSV file
        self.INPUT_CSV = (
            self.get_tmp_path() / "fsf_flood" / "flood-tract2010.csv"
        )

        # this is the main dataframe
        self.df: pd.DataFrame

        # Start dataset-specific vars here
        self.COUNT_PROPERTIES_NATIVE_FIELD_NAME = "count_properties"
        self.COUNT_PROPERTIES_AT_RISK_TODAY = "mid_depth_100_year00"
        self.COUNT_PROPERTIES_AT_RISK_30_YEARS = "mid_depth_100_year30"
        self.CLIP_PROPERTIES_COUNT = 250

    def transform(self) -> None:
        """Reads the unzipped data file into memory and applies the following
        transformations to prepare it for the load() method:

        - Renames the Census Tract column to match the other datasets
        - Calculates share of properties at risk, left-clipping number of properties at 250
        """
        logger.info("Transforming National Risk Index Data")

        # read in the unzipped csv data source then rename the
        # Census Tract column for merging
        df_fsf_flood: pd.DataFrame = pd.read_csv(
            self.INPUT_CSV,
            dtype={self.INPUT_GEOID_TRACT_FIELD_NAME: str},
            low_memory=False,
        )

        df_fsf_flood[self.GEOID_TRACT_FIELD_NAME] = df_fsf_flood[
            self.INPUT_GEOID_TRACT_FIELD_NAME
        ].str.zfill(11)

        df_fsf_flood[self.COUNT_PROPERTIES] = df_fsf_flood[
            self.COUNT_PROPERTIES_NATIVE_FIELD_NAME
        ].clip(lower=self.CLIP_PROPERTIES_COUNT)

        df_fsf_flood[self.SHARE_OF_PROPERTIES_AT_RISK_FROM_FLOODING_TODAY] = (
            df_fsf_flood[self.COUNT_PROPERTIES_AT_RISK_TODAY]
            / df_fsf_flood[self.COUNT_PROPERTIES]
        )
        df_fsf_flood[
            self.SHARE_OF_PROPERTIES_AT_RISK_FROM_FLOODING_IN_30_YEARS
        ] = (
            df_fsf_flood[self.COUNT_PROPERTIES_AT_RISK_30_YEARS]
            / df_fsf_flood[self.COUNT_PROPERTIES]
        )

        # Assign the final df to the class' output_df for the load method with rename
        self.output_df = df_fsf_flood.rename(
            columns={
                self.COUNT_PROPERTIES_AT_RISK_TODAY: self.PROPERTIES_AT_RISK_FROM_FLOODING_TODAY,
                self.COUNT_PROPERTIES_AT_RISK_30_YEARS: self.PROPERTIES_AT_RISK_FROM_FLOODING_IN_30_YEARS,
            }
        )
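A toy illustration of the left-clip-then-divide step above, with hypothetical property counts. The clip keeps tracts with very few properties from producing unstable shares (the same logic applies to the wildfire ETL that follows):

import pandas as pd

# Hypothetical tract property counts shaped like the flood file.
df = pd.DataFrame(
    {
        "count_properties": [100, 10_000],
        "mid_depth_100_year00": [50, 500],
    }
)

# Left-clip the denominator at 250, as CLIP_PROPERTIES_COUNT does.
clipped = df["count_properties"].clip(lower=250)
df["share_at_risk_today"] = df["mid_depth_100_year00"] / clipped
print(df["share_at_risk_today"].tolist())  # [0.2, 0.05]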
@@ -0,0 +1,3 @@
# FSF wildfire risk data

Fire risk is computed as a burn risk probability >= 0.003.
@@ -0,0 +1,83 @@
# pylint: disable=unsubscriptable-object
# pylint: disable=unsupported-assignment-operation
import pandas as pd
from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.base import ValidGeoLevel
from data_pipeline.utils import get_module_logger

logger = get_module_logger(__name__)


class WildfireRiskETL(ExtractTransformLoad):
    """ETL class for the First Street Foundation wildfire risk dataset"""

    NAME = "fsf_wildfire_risk"
    # These data were emailed to the J40 team while first street got
    # their official data sharing channels setup.
    SOURCE_URL = settings.AWS_JUSTICE40_DATASOURCES_URL + "/fsf_fire.zip"
    GEO_LEVEL = ValidGeoLevel.CENSUS_TRACT
    PUERTO_RICO_EXPECTED_IN_DATA = False
    LOAD_YAML_CONFIG: bool = True
    ALASKA_AND_HAWAII_EXPECTED_IN_DATA = False

    # Output score variables (values set on datasets.yml) for linting purposes
    COUNT_PROPERTIES: str
    PROPERTIES_AT_RISK_FROM_FIRE_TODAY: str
    PROPERTIES_AT_RISK_FROM_FIRE_IN_30_YEARS: str
    SHARE_OF_PROPERTIES_AT_RISK_FROM_FIRE_TODAY: str
    SHARE_OF_PROPERTIES_AT_RISK_FROM_FIRE_IN_30_YEARS: str

    def __init__(self):
        # define the full path for the input CSV file
        self.INPUT_CSV = self.get_tmp_path() / "fsf_fire" / "fire-tract2010.csv"

        # this is the main dataframe
        self.df: pd.DataFrame

        # Start dataset-specific vars here
        self.COUNT_PROPERTIES_NATIVE_FIELD_NAME = "count_properties"
        self.COUNT_PROPERTIES_AT_RISK_TODAY = "burnprob_year00_flag"
        self.COUNT_PROPERTIES_AT_RISK_30_YEARS = "burnprob_year30_flag"
        self.CLIP_PROPERTIES_COUNT = 250

    def transform(self) -> None:
        """Reads the unzipped data file into memory and applies the following
        transformations to prepare it for the load() method:

        - Renames the Census Tract column to match the other datasets
        - Calculates share of properties at risk, left-clipping number of properties at 250
        """
        logger.info("Transforming National Risk Index Data")
        # read in the unzipped csv data source then rename the
        # Census Tract column for merging
        df_fsf_fire: pd.DataFrame = pd.read_csv(
            self.INPUT_CSV,
            dtype={self.INPUT_GEOID_TRACT_FIELD_NAME: str},
            low_memory=False,
        )

        df_fsf_fire[self.GEOID_TRACT_FIELD_NAME] = df_fsf_fire[
            self.INPUT_GEOID_TRACT_FIELD_NAME
        ].str.zfill(11)

        df_fsf_fire[self.COUNT_PROPERTIES] = df_fsf_fire[
            self.COUNT_PROPERTIES_NATIVE_FIELD_NAME
        ].clip(lower=self.CLIP_PROPERTIES_COUNT)

        df_fsf_fire[self.SHARE_OF_PROPERTIES_AT_RISK_FROM_FIRE_TODAY] = (
            df_fsf_fire[self.COUNT_PROPERTIES_AT_RISK_TODAY]
            / df_fsf_fire[self.COUNT_PROPERTIES]
        )
        df_fsf_fire[self.SHARE_OF_PROPERTIES_AT_RISK_FROM_FIRE_IN_30_YEARS] = (
            df_fsf_fire[self.COUNT_PROPERTIES_AT_RISK_30_YEARS]
            / df_fsf_fire[self.COUNT_PROPERTIES]
        )

        # Assign the final df to the class' output_df for the load method with rename
        self.output_df = df_fsf_fire.rename(
            columns={
                self.COUNT_PROPERTIES_AT_RISK_TODAY: self.PROPERTIES_AT_RISK_FROM_FIRE_TODAY,
                self.COUNT_PROPERTIES_AT_RISK_30_YEARS: self.PROPERTIES_AT_RISK_FROM_FIRE_IN_30_YEARS,
            }
        )
data/data-pipeline/data_pipeline/etl/sources/geo_utils.py (new file, 92 lines)
@@ -0,0 +1,92 @@
"""Utililities for turning geographies into tracts, using census data"""
from functools import lru_cache
from pathlib import Path
from typing import Optional

import geopandas as gpd
from data_pipeline.etl.sources.tribal.etl import TribalETL
from data_pipeline.utils import get_module_logger

from .census.etl import CensusETL

logger = get_module_logger(__name__)


@lru_cache()
def get_tract_geojson(
    _tract_data_path: Optional[Path] = None,
) -> gpd.GeoDataFrame:
    logger.info("Loading tract geometry data from census ETL")
    GEOJSON_PATH = _tract_data_path
    if GEOJSON_PATH is None:
        GEOJSON_PATH = CensusETL.NATIONAL_TRACT_JSON_PATH
        if not GEOJSON_PATH.exists():
            logger.debug("Census data has not been computed, running")
            census_etl = CensusETL()
            census_etl.extract()
            census_etl.transform()
            census_etl.load()
    tract_data = gpd.read_file(
        GEOJSON_PATH,
        include_fields=["GEOID10"],
    )
    tract_data = tract_data.rename(
        columns={"GEOID10": "GEOID10_TRACT"}, errors="raise"
    )
    return tract_data


@lru_cache()
def get_tribal_geojson(
    _tribal_data_path: Optional[Path] = None,
) -> gpd.GeoDataFrame:
    logger.info("Loading Tribal geometry data from Tribal ETL")
    GEOJSON_PATH = _tribal_data_path
    if GEOJSON_PATH is None:
        GEOJSON_PATH = TribalETL().NATIONAL_TRIBAL_GEOJSON_PATH
        if not GEOJSON_PATH.exists():
            logger.debug("Tribal data has not been computed, running")
            tribal_etl = TribalETL()
            tribal_etl.extract()
            tribal_etl.transform()
            tribal_etl.load()
    tribal_data = gpd.read_file(
        GEOJSON_PATH,
    )
    return tribal_data


def add_tracts_for_geometries(
    df: gpd.GeoDataFrame, tract_data: Optional[gpd.GeoDataFrame] = None
) -> gpd.GeoDataFrame:
    """Adds tract-geoids to dataframe df that contains spatial geometries

    Depends on CensusETL for the geodata to do its conversion

    Args:
        df (GeoDataFrame): a geopandas GeoDataFrame with a point geometry column
        tract_data (GeoDataFrame): optional override to directly pass a
            geodataframe of the tract boundaries. Also helps simplify testing.

    Returns:
        GeoDataFrame: the above dataframe, with an additional GEOID10_TRACT column that
            maps the points in DF to census tracts and a geometry column for later
            spatial analysis
    """
    logger.debug("Appending tract data to dataframe")

    if tract_data is None:
        tract_data = get_tract_geojson()
    else:
        logger.debug("Using existing tract data.")

    assert (
        tract_data.crs == df.crs
    ), f"Dataframe must be projected to {tract_data.crs}"
    df = gpd.sjoin(
        df,
        tract_data[["GEOID10_TRACT", "geometry"]],
        how="inner",
        op="intersects",
    )
    return df
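A hedged usage sketch of add_tracts_for_geometries with the tract_data override (the same hook the repo's tests can use), so the national census GeoJSON does not have to exist on disk. This assumes a geopandas version that still accepts the op= keyword the helper uses internally; the toy geometries are hypothetical:

import geopandas as gpd
import pandas as pd
from shapely.geometry import Polygon

from data_pipeline.etl.sources.geo_utils import add_tracts_for_geometries

# Hypothetical tract boundaries passed directly, bypassing get_tract_geojson().
tract_data = gpd.GeoDataFrame(
    {"GEOID10_TRACT": ["01001020100"]},
    geometry=[Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])],
    crs="epsg:4326",
)

points = gpd.GeoDataFrame(
    pd.DataFrame({"site": ["a", "b"]}),
    geometry=gpd.points_from_xy(x=[0.25, 9.0], y=[0.25, 9.0]),
    crs="epsg:4326",
)

# Point "b" sits outside every tract and is dropped by the inner join.
result = add_tracts_for_geometries(points, tract_data=tract_data)
print(result[["site", "GEOID10_TRACT"]])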
@@ -1,16 +1,18 @@
import pandas as pd

from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.base import ValidGeoLevel
from data_pipeline.utils import get_module_logger
from data_pipeline.utils import unzip_file_from_url

logger = get_module_logger(__name__)


class GeoCorrETL(ExtractTransformLoad):
    NAME = "geocorr"
    GEO_LEVEL: ValidGeoLevel = ValidGeoLevel.CENSUS_TRACT
    PUERTO_RICO_EXPECTED_IN_DATA = False

    def __init__(self):
        self.OUTPUT_PATH = self.DATA_PATH / "dataset" / "geocorr"
@@ -24,6 +26,10 @@ class GeoCorrETL(ExtractTransformLoad):
        self.GEOCORR_PLACES_URL = "https://justice40-data.s3.amazonaws.com/data-sources/geocorr_urban_rural.csv.zip"
        self.GEOCORR_GEOID_FIELD_NAME = "GEOID10_TRACT"
        self.URBAN_HEURISTIC_FIELD_NAME = "Urban Heuristic Flag"
        self.COLUMNS_TO_KEEP = [
            self.GEOID_TRACT_FIELD_NAME,
            self.URBAN_HEURISTIC_FIELD_NAME,
        ]

        self.df: pd.DataFrame
@@ -35,13 +41,11 @@ class GeoCorrETL(ExtractTransformLoad):
            file_url=settings.AWS_JUSTICE40_DATASOURCES_URL
            + "/geocorr_urban_rural.csv.zip",
            download_path=self.get_tmp_path(),
            unzipped_file_path=self.get_tmp_path(),
        )

        self.df = pd.read_csv(
            filepath_or_buffer=self.get_tmp_path() / "geocorr_urban_rural.csv",
            dtype={
                self.GEOCORR_GEOID_FIELD_NAME: "string",
            },
@@ -50,22 +54,10 @@ class GeoCorrETL(ExtractTransformLoad):

    def transform(self) -> None:
        logger.info("Starting GeoCorr Urban Rural Map transform")
        # Put in logic from Jupyter Notebook transform when we switch in the hyperlink to Geocorr

        self.output_df = self.df.rename(
            columns={
                "urban_heuristic_flag": self.URBAN_HEURISTIC_FIELD_NAME,
            },
        )
@@ -0,0 +1,70 @@
import pandas as pd
from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.base import ValidGeoLevel
from data_pipeline.utils import get_module_logger

logger = get_module_logger(__name__)


class HistoricRedliningETL(ExtractTransformLoad):
    NAME = "historic_redlining"
    GEO_LEVEL: ValidGeoLevel = ValidGeoLevel.CENSUS_TRACT
    EXPECTED_MISSING_STATES = [
        "10",
        "11",
        "16",
        "23",
        "30",
        "32",
        "35",
        "38",
        "46",
        "50",
        "56",
    ]
    PUERTO_RICO_EXPECTED_IN_DATA = False
    ALASKA_AND_HAWAII_EXPECTED_IN_DATA: bool = False
    SOURCE_URL = settings.AWS_JUSTICE40_DATASOURCES_URL + "/HRS_2010.zip"

    def __init__(self):
        self.CSV_PATH = self.DATA_PATH / "dataset" / "historic_redlining"

        self.HISTORIC_REDLINING_FILE_PATH = (
            self.get_tmp_path() / "HRS_2010.xlsx"
        )

        self.REDLINING_SCALAR = "Tract-level redlining score"

        self.COLUMNS_TO_KEEP = [
            self.GEOID_TRACT_FIELD_NAME,
            self.REDLINING_SCALAR,
        ]
        self.df: pd.DataFrame

    def transform(self) -> None:
        logger.info("Transforming Historic Redlining Data")
        # this is obviously temporary
        historic_redlining_data = pd.read_excel(
            self.HISTORIC_REDLINING_FILE_PATH
        )
        historic_redlining_data[self.GEOID_TRACT_FIELD_NAME] = (
            historic_redlining_data["GEOID10"].astype(str).str.zfill(11)
        )
        historic_redlining_data = historic_redlining_data.rename(
            columns={"HRS2010": self.REDLINING_SCALAR}
        )

        logger.info(f"{historic_redlining_data.columns}")

        # Calculate lots of different score thresholds for convenience
        for threshold in [3.25, 3.5, 3.75]:
            historic_redlining_data[
                f"{self.REDLINING_SCALAR} meets or exceeds {round(threshold, 2)}"
            ] = (historic_redlining_data[self.REDLINING_SCALAR] >= threshold)
            ## NOTE We add to columns to keep here
            self.COLUMNS_TO_KEEP.append(
                f"{self.REDLINING_SCALAR} meets or exceeds {round(threshold, 2)}"
            )

        self.output_df = historic_redlining_data
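A small sketch of what the threshold loop produces, on hypothetical scores (the column names mirror the ETL's REDLINING_SCALAR pattern):

import pandas as pd

# Hypothetical tract-level redlining scores.
df = pd.DataFrame({"Tract-level redlining score": [3.1, 3.3, 3.6, 3.8]})

# Mirror the loop above: one boolean column per cutoff.
for threshold in [3.25, 3.5, 3.75]:
    df[f"Tract-level redlining score meets or exceeds {round(threshold, 2)}"] = (
        df["Tract-level redlining score"] >= threshold
    )

print(df)  # 3, 2, and 1 tracts pass the 3.25, 3.5, and 3.75 cutoffs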
@@ -1,9 +1,9 @@
import pandas as pd

from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.sources.census.etl_utils import get_state_fips_codes
from data_pipeline.utils import get_module_logger
from data_pipeline.utils import unzip_file_from_url
from pandas.errors import EmptyDataError

logger = get_module_logger(__name__)
@@ -35,7 +35,7 @@ class HousingTransportationETL(ExtractTransformLoad):

            # New file name:
            tmp_csv_file_path = (
                zip_file_dir / f"htaindex2019_data_tracts_{fips}.csv"
            )

            try:
@@ -1,16 +1,28 @@
import pandas as pd
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.base import ValidGeoLevel
from data_pipeline.utils import get_module_logger
from data_pipeline.config import settings

logger = get_module_logger(__name__)


class HudHousingETL(ExtractTransformLoad):
    NAME = "hud_housing"
    GEO_LEVEL: ValidGeoLevel = ValidGeoLevel.CENSUS_TRACT

    def __init__(self):
        self.GEOID_TRACT_FIELD_NAME = "GEOID10_TRACT"
        if settings.DATASOURCE_RETRIEVAL_FROM_AWS:
            self.HOUSING_FTP_URL = (
                f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/"
                "hud_housing/2014thru2018-140-csv.zip"
            )
        else:
            self.HOUSING_FTP_URL = "https://www.huduser.gov/portal/datasets/cp/2014thru2018-140-csv.zip"

        self.HOUSING_ZIP_FILE_DIR = self.get_tmp_path()

        # We measure households earning less than 80% of HUD Area Median Family Income by county
        # and paying greater than 30% of their income to housing costs.
@@ -19,6 +31,17 @@ class HudHousingETL(ExtractTransformLoad):
        self.HOUSING_BURDEN_DENOMINATOR_FIELD_NAME = (
            "HOUSING_BURDEN_DENOMINATOR"
        )
        self.NO_KITCHEN_OR_INDOOR_PLUMBING_FIELD_NAME = (
            "Share of homes with no kitchen or indoor plumbing (percent)"
        )
        self.COLUMNS_TO_KEEP = [
            self.GEOID_TRACT_FIELD_NAME,
            self.HOUSING_BURDEN_NUMERATOR_FIELD_NAME,
            self.HOUSING_BURDEN_DENOMINATOR_FIELD_NAME,
            self.HOUSING_BURDEN_FIELD_NAME,
            self.NO_KITCHEN_OR_INDOOR_PLUMBING_FIELD_NAME,
            "DENOM INCL NOT COMPUTED",
        ]

        # Note: some variable definitions.
        # HUD-adjusted median family income (HAMFI).
@@ -27,7 +50,8 @@ class HudHousingETL(ExtractTransformLoad):
        # - incomplete plumbing facilities,
        # - more than 1 person per room,
        # - cost burden greater than 30%.
        # Table 8 is the desired table for housing burden
        # Table 3 is the desired table for no kitchen or indoor plumbing

        self.df: pd.DataFrame
@@ -38,124 +62,74 @@ class HudHousingETL(ExtractTransformLoad):
            self.HOUSING_ZIP_FILE_DIR,
        )

    def _read_chas_table(self, file_name):
        # New file name:
        tmp_csv_file_path = self.HOUSING_ZIP_FILE_DIR / "140" / file_name
        tmp_df = pd.read_csv(
            filepath_or_buffer=tmp_csv_file_path,
            encoding="latin-1",
        )

        # The CHAS data has census tract ids such as `14000US01001020100`
        # Whereas the rest of our data uses, for the same tract, `01001020100`.
        # This reformats and renames this field.
        tmp_df[self.GEOID_TRACT_FIELD_NAME] = tmp_df["geoid"].str.replace(
            r"^.*?US", "", regex=True
        )

        return tmp_df

    def transform(self) -> None:
        logger.info("Transforming HUD Housing Data")

        table_8 = self._read_chas_table("Table8.csv")
        table_3 = self._read_chas_table("Table3.csv")

        self.df = table_8.merge(
            table_3, how="outer", on=self.GEOID_TRACT_FIELD_NAME
        )

        # Calculate share that lacks indoor plumbing or kitchen
        # This is computed as
        # (
        #   owner occupied without plumbing + renter occupied without plumbing
        # ) / (
        #   total of owner and renter occupied
        # )
        self.df[self.NO_KITCHEN_OR_INDOOR_PLUMBING_FIELD_NAME] = (
            # T3_est3: owner-occupied lacking complete plumbing or kitchen facilities for all levels of income
            # T3_est46: subtotal: renter-occupied lacking complete plumbing or kitchen facilities for all levels of income
            # T3_est2: subtotal: owner-occupied for all levels of income
            # T3_est45: subtotal: renter-occupied for all levels of income
            self.df["T3_est3"]
            + self.df["T3_est46"]
        ) / (self.df["T3_est2"] + self.df["T3_est45"])

        # Calculate housing burden
        # See "CHAS data dictionary 12-16.xlsx"

        # Owner occupied numerator fields
        OWNER_OCCUPIED_NUMERATOR_FIELDS = [
            "T8_est7",  # Owner, less than or equal to 30% of HAMFI, greater than 30% but less than or equal to 50%
            "T8_est10",  # Owner, less than or equal to 30% of HAMFI, greater than 50%
            "T8_est20",  # Owner, greater than 30% but less than or equal to 50% of HAMFI, greater than 30% but less than or equal to 50%
            "T8_est23",  # Owner, greater than 30% but less than or equal to 50% of HAMFI, greater than 50%
            "T8_est33",  # Owner, greater than 50% but less than or equal to 80% of HAMFI, greater than 30% but less than or equal to 50%
            "T8_est36",  # Owner, greater than 50% but less than or equal to 80% of HAMFI, greater than 50%
        ]

        # These rows have the values where HAMFI was not computed, b/c of no or negative income.
        # They are in the same order as the rows above
        OWNER_OCCUPIED_NOT_COMPUTED_FIELDS = [
            "T8_est13",
            "T8_est26",
            "T8_est39",
            "T8_est52",
            "T8_est65",
        ]

        # This represents all owner-occupied housing units
        OWNER_OCCUPIED_POPULATION_FIELD = "T8_est2"

        # Renter occupied numerator fields
        RENTER_OCCUPIED_NUMERATOR_FIELDS = [
@@ -280,18 +254,4 @@ class HudHousingETL(ExtractTransformLoad):
            float
        )

        self.output_df = self.df
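A worked toy example of the kitchen/plumbing share, with hypothetical CHAS Table 3 counts (field meanings as in the comments above):

import pandas as pd

# Hypothetical CHAS Table 3 counts for two tracts.
df = pd.DataFrame(
    {
        "T3_est2": [400, 1000],   # all owner-occupied units
        "T3_est3": [8, 0],        # owner-occupied lacking plumbing/kitchen
        "T3_est45": [600, 500],   # all renter-occupied units
        "T3_est46": [12, 15],     # renter-occupied lacking plumbing/kitchen
    }
)

share = (df["T3_est3"] + df["T3_est46"]) / (df["T3_est2"] + df["T3_est45"])
print(share.tolist())  # [0.02, 0.01]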
@@ -1,16 +1,27 @@
import pandas as pd
import requests
from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.utils import get_module_logger

logger = get_module_logger(__name__)


class HudRecapETL(ExtractTransformLoad):
    def __init__(self):
        if settings.DATASOURCE_RETRIEVAL_FROM_AWS:
            self.HUD_RECAP_CSV_URL = (
                f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/"
                "hud_recap/Racially_or_Ethnically_Concentrated_Areas_of_Poverty__R_ECAPs_.csv"
            )
        else:
            self.HUD_RECAP_CSV_URL = (
                "https://opendata.arcgis.com/api/v3/datasets/"
                "56de4edea8264fe5a344da9811ef5d6e_0/downloads/data?format=csv&spatialRefId=4326"
            )

        self.HUD_RECAP_CSV = (
            self.get_tmp_path()
            / "Racially_or_Ethnically_Concentrated_Areas_of_Poverty__R_ECAPs_.csv"
@@ -26,7 +37,11 @@ class HudRecapETL(ExtractTransformLoad):

    def extract(self) -> None:
        logger.info("Downloading HUD Recap Data")
        download = requests.get(
            self.HUD_RECAP_CSV_URL,
            verify=None,
            timeout=settings.REQUESTS_DEFAULT_TIMOUT,
        )
        file_contents = download.content
        csv_file = open(self.HUD_RECAP_CSV, "wb")
        csv_file.write(file_contents)
@@ -1,10 +1,9 @@
import geopandas as gpd
import pandas as pd
from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.score import field_names
from data_pipeline.utils import get_module_logger

logger = get_module_logger(__name__)
@@ -96,4 +95,3 @@ class MappingForEJETL(ExtractTransformLoad):

    def validate(self) -> None:
        logger.info("Validating Mapping For EJ Data")
@@ -37,4 +37,4 @@ Oklahoma City,90R,D
Milwaukee Co.,S-D1,D
Milwaukee Co.,S-D2,D
Milwaukee Co.,S-D3,D
Milwaukee Co.,S-D4,D
@@ -1,10 +1,12 @@
import pathlib

import numpy as np
import pandas as pd

from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.score import field_names
from data_pipeline.utils import download_file_from_url
from data_pipeline.utils import get_module_logger
from data_pipeline.config import settings

logger = get_module_logger(__name__)
@@ -21,10 +23,16 @@ class MappingInequalityETL(ExtractTransformLoad):
    """

    def __init__(self):
        if settings.DATASOURCE_RETRIEVAL_FROM_AWS:
            self.MAPPING_INEQUALITY_CSV_URL = (
                f"{settings.AWS_JUSTICE40_DATASOURCES_URL}/raw-data-sources/"
                "mapping_inequality/holc_tract_lookup.csv"
            )
        else:
            self.MAPPING_INEQUALITY_CSV_URL = (
                "https://raw.githubusercontent.com/americanpanorama/Census_HOLC_Research/"
                "main/2010_Census_Tracts/holc_tract_lookup.csv"
            )
        self.MAPPING_INEQUALITY_CSV = (
            self.get_tmp_path() / "holc_tract_lookup.csv"
        )
@@ -47,16 +55,21 @@ class MappingInequalityETL(ExtractTransformLoad):
        self.HOLC_GRADE_AND_ID_FIELD: str = "holc_id"
        self.CITY_INPUT_FIELD: str = "city"

        self.HOLC_GRADE_D_FIELD: str = "HOLC Grade D (hazardous)"
        self.HOLC_GRADE_C_FIELD: str = "HOLC Grade C (declining)"
        self.HOLC_GRADE_MANUAL_FIELD: str = "HOLC Grade (manually mapped)"
        self.HOLC_GRADE_DERIVED_FIELD: str = "HOLC Grade (derived)"

        self.COLUMNS_TO_KEEP = [
            self.GEOID_TRACT_FIELD_NAME,
            field_names.HOLC_GRADE_C_TRACT_PERCENT_FIELD,
            field_names.HOLC_GRADE_C_OR_D_TRACT_PERCENT_FIELD,
            field_names.HOLC_GRADE_C_OR_D_TRACT_50_PERCENT_FIELD,
            field_names.HOLC_GRADE_D_TRACT_PERCENT_FIELD,
            field_names.HOLC_GRADE_D_TRACT_20_PERCENT_FIELD,
            field_names.HOLC_GRADE_D_TRACT_50_PERCENT_FIELD,
            field_names.HOLC_GRADE_D_TRACT_75_PERCENT_FIELD,
            field_names.REDLINED_SHARE,
        ]

        self.df: pd.DataFrame
@@ -113,34 +126,58 @@ class MappingInequalityETL(ExtractTransformLoad):
            how="left",
        )

        # Create a single field that combines the 'derived' grade C and D fields with the
        # manually mapped grade C and D field into a single grade C and D field.
        ## Note: there are no manually derived C tracts at the moment

        for grade, field_name in [
            ("C", self.HOLC_GRADE_C_FIELD),
            ("D", self.HOLC_GRADE_D_FIELD),
        ]:
            merged_df[field_name] = np.where(
                (merged_df[self.HOLC_GRADE_DERIVED_FIELD] == grade)
                | (merged_df[self.HOLC_GRADE_MANUAL_FIELD] == grade),
                True,
                None,
            )

        redlined_dataframes_list = [
            merged_df[merged_df[field].fillna(False)]
            .groupby(self.GEOID_TRACT_FIELD_NAME)[self.TRACT_PROPORTION_FIELD]
            .sum()
            .rename(new_name)
            for field, new_name in [
                (
                    self.HOLC_GRADE_D_FIELD,
                    field_names.HOLC_GRADE_D_TRACT_PERCENT_FIELD,
                ),
                (
                    self.HOLC_GRADE_C_FIELD,
                    field_names.HOLC_GRADE_C_TRACT_PERCENT_FIELD,
                ),
            ]
        ]

        # Group by tract ID to get tract proportions of just C or just D
        # This produces a single row per tract
        grouped_df = (
            pd.concat(
                redlined_dataframes_list,
                axis=1,
            )
            .fillna(0)
            .reset_index()
        )

        grouped_df[
            field_names.HOLC_GRADE_C_OR_D_TRACT_PERCENT_FIELD
        ] = grouped_df[
            [
                field_names.HOLC_GRADE_C_TRACT_PERCENT_FIELD,
                field_names.HOLC_GRADE_D_TRACT_PERCENT_FIELD,
            ]
        ].sum(
            axis=1
        )

        # Calculate some specific threshold cutoffs, for convenience.
@@ -154,15 +191,14 @@ class MappingInequalityETL(ExtractTransformLoad):
            grouped_df[field_names.HOLC_GRADE_D_TRACT_PERCENT_FIELD] > 0.75
        )

        grouped_df[field_names.HOLC_GRADE_C_OR_D_TRACT_50_PERCENT_FIELD] = (
            grouped_df[field_names.HOLC_GRADE_C_OR_D_TRACT_PERCENT_FIELD] > 0.5
        )

        # Create the indicator we will use
        grouped_df[field_names.REDLINED_SHARE] = (
            grouped_df[field_names.HOLC_GRADE_C_OR_D_TRACT_PERCENT_FIELD] > 0.5
        ) & (grouped_df[field_names.HOLC_GRADE_D_TRACT_PERCENT_FIELD] > 0)

        # Sort for convenience.
        grouped_df.sort_values(by=self.GEOID_TRACT_FIELD_NAME, inplace=True)
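To make that aggregation concrete, here is a hypothetical two-tract sketch of how HOLC area proportions roll up into per-grade percents and the final REDLINED_SHARE indicator (column names are simplified stand-ins for the field_names constants):

import pandas as pd

# Hypothetical proportions: each row is one HOLC area's share of a tract.
merged = pd.DataFrame(
    {
        "GEOID10_TRACT": ["t1", "t1", "t2"],
        "grade": ["C", "D", "C"],
        "tract_proportion": [0.3, 0.4, 0.2],
    }
)

# Sum area proportions per tract and grade, as the groupby above does.
by_grade = (
    merged.pivot_table(
        index="GEOID10_TRACT",
        columns="grade",
        values="tract_proportion",
        aggfunc="sum",
    )
    .fillna(0)
    .reset_index()
)
by_grade["C_or_D"] = by_grade["C"] + by_grade["D"]

# Redlined: more than half the tract is C or D AND some of it is grade D.
by_grade["redlined_share"] = (by_grade["C_or_D"] > 0.5) & (by_grade["D"] > 0)
print(by_grade)  # t1 is flagged (0.7 C-or-D, 0.4 D); t2 is not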
@@ -8,7 +8,7 @@ According to the documentation:

There exist two data categories: Population Burden and Population Characteristics.

There are two indicators within Population Burden: Exposure, and Socioeconomic. Within Population Characteristics, there exist two indicators: Sensitive, Environmental Effects. Each respective indicator contains several relevant covariates, and an averaged score.

The two "Pollution Burden" average scores are then averaged together and the result is multiplied by the average of the "Population Characteristics" categories to get the total EJ Score for each tract.
@@ -20,4 +20,4 @@ Furthermore, it was determined that Bladensburg residents are at a higher risk o

Source:

Driver, A.; Mehdizadeh, C.; Bara-Garcia, S.; Bodenreider, C.; Lewis, J.; Wilson, S. Utilization of the Maryland Environmental Justice Screening Tool: A Bladensburg, Maryland Case Study. Int. J. Environ. Res. Public Health 2019, 16, 348.
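A hedged sketch of that score combination, with hypothetical averaged indicator values (the exact field names and scales in the Maryland data may differ):

# Hypothetical averaged indicator scores for one tract.
exposure = 0.8              # Population Burden: Exposure
socioeconomic = 0.6         # Population Burden: Socioeconomic
sensitive = 0.5             # Population Characteristics: Sensitive
env_effects = 0.7           # Population Characteristics: Environmental Effects

pollution_burden = (exposure + socioeconomic) / 2           # 0.7
population_characteristics = (sensitive + env_effects) / 2  # 0.6

# Total EJ score: burden average multiplied by characteristics average.
ej_score = pollution_burden * population_characteristics
print(ej_score)  # 0.42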
@@ -1,11 +1,11 @@
from glob import glob

import geopandas as gpd
import pandas as pd

from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.score import field_names
from data_pipeline.utils import get_module_logger

logger = get_module_logger(__name__)
@@ -1,5 +1,4 @@
# Michigan EJSCREEN

<!-- markdown-link-check-disable -->
The Michigan EJSCREEN description and publication can be found [here](https://deepblue.lib.umich.edu/bitstream/handle/2027.42/149105/AssessingtheStateofEnvironmentalJusticeinMichigan_344.pdf).
<!-- markdown-link-check-enable-->
@@ -30,4 +29,4 @@ Sources:
* Minnesota Pollution Control Agency. (2015, December 15). Environmental Justice Framework Report.
Retrieved from https://www.pca.state.mn.us/sites/default/files/p-gen5-05.pdf.

* Faust, J., L. August, K. Bangia, V. Galaviz, J. Leichty, S. Prasad… and L. Zeise. (2017, January). Update to the California Communities Environmental Health Screening Tool CalEnviroScreen 3.0. Retrieved from OEHHA website: https://oehha.ca.gov/media/downloads/calenviroscreen/report/ces3report.pdf
@@ -1,9 +1,8 @@
import pandas as pd

from data_pipeline.config import settings
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.score import field_names
from data_pipeline.utils import get_module_logger

logger = get_module_logger(__name__)
Some files were not shown because too many files have changed in this diff.