Backend release branch to main (#1822)

* Create deploy_be_staging.yml (#1575)

* Imputing income using geographic neighbors (#1559)

Imputes the income field with a light refactor. Needs more refactoring and more tests (I spot-checked). The next ticket will check and address that, but a lot of the "narwhal" architecture is here.
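A rough sketch of the idea (assumed frame and column names, not the pipeline's actual implementation): a tract with missing income borrows the mean of its geographic neighbors.

```python
import geopandas as gpd


def impute_income_from_neighbors(tracts: gpd.GeoDataFrame) -> gpd.GeoDataFrame:
    """Fill missing median income with the mean of intersecting (neighboring) tracts."""
    missing = tracts[tracts["median_income"].isna()]
    # Spatial join: every tract with a missing value picks up all tracts it touches.
    neighbors = gpd.sjoin(
        missing[["GEOID10_TRACT", "geometry"]],
        tracts[["median_income", "geometry"]],
        how="left",
        predicate="intersects",
    )
    # Mean of the neighbors' incomes (NaNs, including the tract's own, are skipped).
    fill_values = neighbors.groupby("GEOID10_TRACT")["median_income"].mean()
    tracts = tracts.set_index("GEOID10_TRACT")
    tracts["median_income"] = tracts["median_income"].fillna(fill_values)
    return tracts.reset_index()
```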

* Adding HOLC indicator (#1579)

Added the HOLC indicator (Historic Redlining Score) from the NCRC work; included the 3.25 cutoff and low income as part of the housing burden category.

* Update backend for Puerto Rico (#1686)

* Update PR threshold count to 10

We now show 10 indicators for PR. See the discussion on the GitHub issue for more info: https://github.com/usds/justice40-tool/issues/1621

* Do not use linguistic iso for Puerto Rico

Closes 1350.

Co-authored-by: Shelby Switzer <shelbyswitzer@gmail.com>

* updating

* Do not drop Guam and USVI from ETL (#1681)

* Remove code that drops Guam and USVI from ETL

* Add back code for dropping rows by FIPS code

We may want this functionality, so let's keep it and just set the constant to an empty array for now.

Co-authored-by: Shelby Switzer <shelbyswitzer@gmail.com>

* Emma nechamkin/holc patch (#1742)

Removing HOLC calculation from score narwhal.

* updating ejscreen data, try two (#1747)

* Rescaling linguistic isolation  (#1750)

Rescales linguistic isolation to drop Puerto Rico

* adds UST indicator (#1786)

adds leaky underground storage tanks

* Changing LHE in tiles to a boolean (#1767)

also includes merging / clean up of the release

* added indoor plumbing to chas

* added indoor plumbing to score housing burden

* added indoor plumbing to score housing burden

* first run through

* Refactor DOE Energy Burden and COI to use YAML (#1796)

* added tribalId for Supplemental dataset (#1804)

* Setting zoom levels for tribal map (#1810)

* NRI dataset and initial score YAML configuration (#1534)

* update be staging gha

* NRI dataset and initial score YAML configuration

* checkpoint

* adding data checks for release branch

* passing tests

* adding INPUT_EXTRACTED_FILE_NAME to base class

* lint

* columns to keep and tests

* update be staging gha

* checkpoint

* update be staging gha

* NRI dataset and initial score YAML configuration

* checkpoint

* adding data checks for release branch

* passing tests

* adding INPUT_EXTRACTED_FILE_NAME to base class

* lint

* columns to keep and tests

* checkpoint

* PR Review

* removing source url

* tests

* stop execution of ETL if there's a YAML schema issue

* update be staging gha

* adding source url as class var again

* clean up

* force cache bust

* gha cache bust

* dynamically set score vars from YAML

* docstrings

* removing last updated year - optional reverse percentile

* passing tests

* sort order

* column ordering

* PR review

* class level vars

* Updating DatasetsConfig

* fix pylint errors

* moving metadata hint back to code

Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov>

* Correct copy typo (#1809)

* Add basic test suite for COI (#1518)

* Update COI to use new yaml (#1518)

* Add tests for DOE energy burden (#1518)

* Add dataset config for energy burden (#1518)

* Refactor ETL to use datasets.yml (#1518)

* Add fake GEOIDs to COI tests (#1518)

* Refactor _setup_etl_instance_and_run_extract to base (#1518)

For the three classes we've done so far, a generic
_setup_etl_instance_and_run_extract works fine. For the moment we can
reuse the same setup method until we decide future classes need more
flexibility --- and they can always subclass if they do.

* Add output-path tests (#1518)

* Update YAML to match constant (#1518)

* Don't blindly set float format (#1518)

* Add defaults for extract (#1518)

* Run YAML load on all subclasses (#1518)

* Update description fields (#1518)

* Update YAML per final format (#1518)

* Update fixture tract IDs (#1518)

* Update base class refactor (#1518)

Now that NRI is final, I needed to make a small number of updates to my
refactored code.

* Remove old comment (#1518)

* Fix type signature and return (#1518)

* Update per code review (#1518)

Co-authored-by: Jorge Escobar <83969469+esfoobar-usds@users.noreply.github.com>
Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov>
Co-authored-by: Vim <86254807+vim-usds@users.noreply.github.com>

* Update etl_score_geo.py

Yikes! Fixing a merge mess-up!

* updated to fix linting errors (#1818)

Cleans and updates base branch

* Adding back MapComparison video

* Add FUDS ETL (#1817)

* Add spatial join method (#1871)

Since we'll need to figure out the tracts for a large number of points
in future tickets, add a utility to handle grabbing the tract geometries
and adding tract data to a point dataset.
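A minimal sketch of what such a point-to-tract spatial join can look like with geopandas (the frame and column handling here is illustrative, not the utility's actual signature):

```python
import geopandas as gpd


def add_tracts_to_points(
    points: gpd.GeoDataFrame, tracts: gpd.GeoDataFrame
) -> gpd.GeoDataFrame:
    # Make sure both layers share a CRS before joining.
    points = points.to_crs(tracts.crs)
    # Each point picks up the attributes (e.g. the tract GEOID) of the polygon containing it.
    return gpd.sjoin(points, tracts, how="left", predicate="within")
```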

* Add FUDS, also jupyter lab (#1871)

* Add YAML configs for FUDS (#1871)

* Allow input geoid to be optional (#1871)

* Add FUDS ETL, tests, test-data notebook (#1871)

This adds the ETL class for Formerly Used Defense Sites (FUDS). This is
different from most other ETLs since these FUDS are not provided by
tract, but instead by geographic point, so we need to assign FUDS to
tracts and then do calculations from there.

* Floats -> Ints, as I intended (#1871)

* Floats -> Ints, as I intended (#1871)

* Formatting fixes (#1871)

* Add test false positive GEOIDs (#1871)

* Add gdal binaries (#1871)

* Refactor pandas code to be more idiomatic (#1871)

Per Emma, the more pandas-y way of doing my counts is using np.where to
add the values I need, then groupby and size. It is definitely more
compact, and also I think more correct!
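A hedged sketch of that pattern (the status values and column names are assumptions for illustration, not the ETL's real fields):

```python
import numpy as np
import pandas as pd


def count_sites_per_tract(sites: pd.DataFrame) -> pd.Series:
    # Tag each site as eligible or ineligible in a single vectorized pass...
    sites["eligibility"] = np.where(
        sites["status"].isin(["Eligible", "Has Project"]), "eligible", "ineligible"
    )
    # ...then count sites per tract and eligibility bucket with groupby/size.
    return sites.groupby(["GEOID10_TRACT", "eligibility"]).size()
```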

* Update configs per Emma suggestions (#1871)

* Type fixed! (#1871)

* Remove spurious import from vscode (#1871)

* Snapshot update after changing col name (#1871)

* Move up GDAL (#1871)

* Adjust geojson strategy (#1871)

* Try running census separately first (#1871)

* Fix import order (#1871)

* Cleanup cache strategy (#1871)

* Download census data from S3 instead of re-calculating (#1871)

* Clarify pandas code per Emma (#1871)

* Disable markdown check for link

* Adding DOT composite to travel score (#1820)

This adds the DOT dataset to the ETL and to the score. Note that currently we take a percentile of an average of percentiles.
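In other words, the composite is itself re-ranked. A simplified sketch with assumed column names:

```python
import pandas as pd


def travel_composite_percentile(
    df: pd.DataFrame, component_columns: list
) -> pd.Series:
    # Rank each DOT component 0-1, average the ranks, then rank that average again.
    component_percentiles = df[component_columns].rank(pct=True)
    return component_percentiles.mean(axis=1).rank(pct=True)
```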

* Adding first street foundation data (#1823)

Adding FSF flood and wildfire risk datasets to the score.

* first run -- adding NLCD data to the ETL, but not yet to the score

* Add abandoned mine lands data (#1824)

* Add notebook to generate test data (#1780)

* Add Abandoned Mine Land data (#1780)

Using a similar structure but a simpler approach compared to FUDS, add an
indicator for whether a tract has an abandoned mine.

* Adding some detail to dataset readmes

Just a thought!

* Apply feedback from review (#1780)

* Fixup bad string that broke test (#1780)

* Update a string that I should have renamed (#1780)

* Reduce number of threads to reduce memory pressure (#1780)

* Try not running geo data (#1780)

* Run the high-memory sets separately (#1780)

* Actually deduplicate (#1780)

* Add flag for memory intensive ETLs (#1780)

* Document new flag for datasets (#1780)

* Add flag for new datasets for rebase (#1780)

Co-authored-by: Emma Nechamkin <97977170+emma-nechamkin@users.noreply.github.com>

* Adding NLCD data (#1826)

Adding NLCD's natural space indicator end to end to the score.

* Add donut hole calculation to score (#1828)

Adds the adjacency index to the pipeline. Requires thorough QA.

* Adding eamlis and fuds data to legacy pollution in score (#1832)

Update to add EAMLIS and FUDS data to score

* Update to use new FSF files (#1838)

backend is partially done!

* Quick fix to kitchen or plumbing indicator

Yikes! I think I messed something up and dropped the pctile field suffix from when the KP score gets calculated. Fixing right quick.

* Fast flag update (#1844)

Added additional flags for the front end based on our conversation in stand up this morning.

* Tiles fix (#1845)

Fixes score-geo and adds flags

* Update etl_score_geo.py

* Issue 1827: Add demographics to tiles and download files (#1833)

* Adding demographics for use in sidebar and download files

* Updates backend constants to N (#1854)

* updated to show T/F/null vs T/F for AML and FUDS (#1866)

* fix markdown

* just testing that the boolean is preserved on gha

* checking drop tracts works

* OOPS!

Old changes persisted

* adding a check to the agvalue calculation for nri

* updated with error messages

* updated error message

* tuple type

* Score tests (#1847)

* update Python version on README; tuple typing fix

* Alaska tribal points fix (#1821)

* Bump mistune from 0.8.4 to 2.0.3 in /data/data-pipeline (#1777)

Bumps [mistune](https://github.com/lepture/mistune) from 0.8.4 to 2.0.3.
- [Release notes](https://github.com/lepture/mistune/releases)
- [Changelog](https://github.com/lepture/mistune/blob/master/docs/changes.rst)
- [Commits](https://github.com/lepture/mistune/compare/v0.8.4...v2.0.3)

---
updated-dependencies:
- dependency-name: mistune
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* poetry update

* initial pass of score tests

* add threshold tests

* added ses threshold (not donut, not island)

* testing suite -- stopping for the day

* added test for lead proxy indicator

* Refactor score tests to make them less verbose and more direct (#1865)

* Cleanup tests slightly before refactor (#1846)

* Refactor score calculations tests

* Feedback from review

* Refactor output tests like calculation tests (#1846) (#1870)

* Reorganize files (#1846)

* Switch from lru_cache to fixture scopes (#1846)

* Add tests for all factors (#1846)

* Mark smoketests and run as part of be deploy (#1846)

* Update renamed var (#1846)

* Switch from named tuple to dataclass (#1846)

This is annoying, but pylint in Python 3.8 was crashing while parsing the named
tuple. We weren't using any namedtuple-specific features, so I made the
type a dataclass just to get pylint to behave.
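A minimal before/after sketch of that swap (field names are illustrative, not the project's actual test type):

```python
from dataclasses import dataclass

# Before (roughly): a namedtuple that pylint under Python 3.8 crashed on.
# PercentileTestConfig = namedtuple("PercentileTestConfig", ["column_name", "threshold"])


# After: a frozen dataclass with the same fields and no behavioral change.
@dataclass(frozen=True)
class PercentileTestConfig:
    column_name: str
    threshold: float
```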

* Add default timeout to requests (#1846)

* Fix type (#1846)

* Fix merge mistake on poetry.lock (#1846)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Jorge Escobar <jorge.e.escobar@omb.eop.gov>
Co-authored-by: Jorge Escobar <83969469+esfoobar-usds@users.noreply.github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Matt Bowen <83967628+mattbowen-usds@users.noreply.github.com>
Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov>

* just testing that the boolean is preserved on gha (#1867)

* updated with hopefully a fix; coercing aml, fuds, hrs to booleans for the raw value to preserve null character.

* Adding tests to ensure proper calculations (#1871)

* just testing that the boolean is preserved on gha
* checking drop tracts works
* adding a check to the agvalue calculation for nri
* updated with error messages

* tribal tiles fix (#1874)

* Alaska tribal points fix (#1821)

* tribal tiles fix

* disabling child opportunity

* lint

* removing COI

* removing commented out code

* Pipeline tile tests (#1864)

* temp update

* updating with fips check

* adding check on pfs

* updating with pfs test

* Update test_tiles_smoketests.py

* Fix lint errors (#1848)

* Add column names test (#1848)

* Mark tests as smoketests (#1848)

* Move to other score-related tests (#1848)

* Recast Total threshold criteria exceeded to int (#1848)

In writing tests to verify that the output of the tiles CSV matches the final
score CSV, I noticed TC/Total threshold criteria exceeded was getting
cast from an int64 to a float64 in the process of PostScoreETL. I
tracked it down to the line where we merge the score dataframe with
constants.DATA_CENSUS_CSV_FILE_PATH --- there were > 100 tracts in the
national census CSV that don't exist in the score, so those ended up
with a Total threshold count of np.nan, which is a float, and that
cast those columns to float. For the moment I just cast it back.
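A small sketch of the failure mode and the stop-gap fix (frame and column names are illustrative):

```python
import pandas as pd


def merge_and_recast(score_df: pd.DataFrame, national_tracts: pd.DataFrame) -> pd.DataFrame:
    # Left-merging against the full national tract list leaves NaN for tracts that
    # have no score row, which silently upcasts the int64 count column to float64.
    merged = national_tracts.merge(score_df, on="GEOID10_TRACT", how="left")
    # Stop-gap: fill the gaps and cast the threshold count back to an integer.
    merged["Total threshold criteria exceeded"] = (
        merged["Total threshold criteria exceeded"].fillna(0).astype("int64")
    )
    return merged
```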

* No need for low memory (#1848)

* Add additional tests of tiles.csv (#1848)

* Drop pre-2010 rows before computing score (#1848)

Note this is probably NOT the optimal place for this change; it might
make more sense for each source to filter its own tracts down to the
acceptable tract list. However, that would be a pretty invasive change,
whereas this spot is central, and plenty of other things happening in score
transform could also be moved to sources, so for today, here's where the
change will live.
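For context, a hedged sketch of restricting the score to the canonical tract list (later done as an inner join, per the commit below; names are illustrative):

```python
import pandas as pd


def restrict_to_national_tracts(
    df: pd.DataFrame, national_tracts: pd.DataFrame
) -> pd.DataFrame:
    # Keep only rows whose tract ID appears in the canonical 2010 tract list,
    # dropping pre-2010 tract IDs that no longer exist.
    return df.merge(
        national_tracts[["GEOID10_TRACT"]], on="GEOID10_TRACT", how="inner"
    )
```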

* Fix typo (#1848)

* Switch from filter to inner join (#1848)

* Remove no-op lines from tiles (#1848)

* Apply feedback from review, linter (#1848)

* Check the values of everything in the frame (#1848)

* Refactor checker class (#1848)

* Add test for state names (#1848)

* cleanup from reviewing my own code (#1848)

* Fix lint error (#1858)

* Apply Emma's feedback from review (#1848)

* Remove refs to national_df (#1848)

* Account for new, fake nullable bools in tiles (#1848)

To handle a GeoJSON limitation, Emma converted some nullable boolean
columns to float64 in the tiles export with the values {0.0, 1.0, NaN},
giving us the same expressiveness. Sadly, this broke my assumption that
all columns between the score and tiles CSVs would have the same dtypes,
so I need to account for these new, fake bools in my test.
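Roughly, the encoding looks like this (a sketch, not the exact export code):

```python
import numpy as np
import pandas as pd

raw = pd.Series([True, False, pd.NA], dtype="boolean")
# Encode as float64 so the tiles/GeoJSON can carry it: 1.0 / 0.0 / NaN keeps the null state.
as_float = pd.Series(
    np.where(raw.isna(), np.nan, raw.fillna(False).astype("float64")),
    index=raw.index,
)
print(as_float.tolist())  # [1.0, 0.0, nan]
```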

* Use equals instead of my worse version (#1848)

* Missed a spot where we called _create_score_data (#1848)

* Update per safety (#1848)

Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov>

* Add tests to make sure each source makes it to the score correctly (#1878)

* Remove unused persistent poverty from score (#1835)

* Test a few datasets for overlap in the final score (#1835)

* Add remaining data sources (#1853)

* Apply code-review feedback (#1835)

* Rearrange a little for readability (#1835)

* Add tract test (#1835)

* Add test for score values (#1835)

* Check for unmatched source tracts (#1835)

* Cleanup numeric code to plaintext (#1835)

* Make import more obvious (#1835)

* Updating traffic barriers to include low pop threshold (#1889)

Changing the traffic barriers to be included only for places with a recorded population

* Remove no land tracts from map (#1894)

remove from map

* Issue 1831: missing life expectancy data from Maine and Wisconsin (#1887)

* Fixing missing states and adding tests for states to all classes

* Removing low pop tracts from FEMA population loss (#1898)

dropping 0 population from FEMA

* 1831 Follow up (#1902)

This causes no functional change to the code. It does two things:

1. Uses set.difference instead of the - operator to improve code style for working with sets (see the small illustration below).

2. Removes the line EXPECTED_MISSING_STATES = ["02", "15"], which is now redundant because of the line I added in a previous pull request, ALASKA_AND_HAWAII_EXPECTED_IN_DATA = False.
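The style change in item 1, illustrated with the states mentioned above:

```python
# Equivalent results; the named method reads more clearly than the operator.
expected_missing_states = {"02", "15"}
states_in_data = {"15"}

assert expected_missing_states - states_in_data == {"02"}
assert expected_missing_states.difference(states_in_data) == {"02"}
```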

* Add tests for all non-census sources (#1899)

* Refactor CDC life-expectancy (1554)

* Update to new tract list (#1554)

* Adjust for tests (#1848)

* Add tests for cdc_places (#1848)

* Add EJScreen tests (#1848)

* Add tests for HUD housing (#1848)

* Add tests for GeoCorr (#1848)

* Add persistent poverty tests (#1848)

* Update for sources without zips, for new validation (#1848)

* Update tests for new multi-CSV bug (#1848)

Lucas updated the CDC life expectancy data to handle a bug where two
states are missing from the US Overall download. Since virtually none of
our other ETL classes download multiple CSVs directly like this, it
required a pretty invasive new mocking strategy.

* Add basic tests for nature deprived (#1848)

* Add wildfire tests (#1848)

* Add flood risk tests (#1848)

* Add DOT travel tests (#1848)

* Add historic redlining tests (#1848)

* Add tests for ME and WI (#1848)

* Update now that validation exists (#1848)

* Adjust for validation (#1848)

* Add health insurance back to cdc places (#1848)

Oops

* Update tests with new field (#1848)

* Test for blank tract removal (#1848)

* Add tracts for clipping behavior

* Test clipping and zfill behavior (#1848)

* Fix bad test assumption (#1848)

* Simplify class, add test for tract padding (#1848)

* Fix percentage inversion, update tests (#1848)

Looking through the transformations, I noticed that we were subtracting
a percentage that is usually between 0-100 from 1 instead of 100, and so
were ending up with some surprising results. Confirmed with lucasmbrown-usds.
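A tiny sketch of the inversion (the values are made up):

```python
import pandas as pd

share_with_access = pd.Series([80.0, 95.0, 60.0])  # percentages on a 0-100 scale
wrong = 1 - share_with_access    # -79.0, -94.0, -59.0: the surprising results
right = 100 - share_with_access  # 20.0, 5.0, 40.0: the intended complement
```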

* Add note about first street data (#1848)

* Issue 1900: Tribal overlap with Census tracts (#1903)

* working notebook

* updating notebook

* wip

* fixing broken tests

* adding tribal overlap files

* WIP

* WIP

* WIP, calculated count and names

* working

* partial cleanup

* partial cleanup

* updating field names

* fixing bug

* removing pyogrio

* removing unused imports

* updating test fixtures to be more realistic

* cleaning up notebook

* fixing black

* fixing flake8 errors

* adding tox instructions

* updating etl_score

* suppressing warning

* Use projected CRSes, ignore geom types (#1900)

I looked into this a bit, and in general the geometry type mismatch
changes very little about the calculation; we have a mix of
multipolygons and polygons. The fastest thing to do is just not keep
geom type; I did some runs with it set to both True and False, and
they're the same within 9 digits of precision. Logically we just want
the overlaps, regardless of how the actual geometries are encoded between
the frames, so we can in this case ignore the geom types and feel OKAY.

I also moved to projected CRSes, since we are actually trying to do area
calculations, so we should. Again, the change is small in
magnitude but logically more sound.
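A hedged sketch of the approach (EPSG:5070, CONUS Albers, is an illustrative projected CRS choice, not necessarily the one the pipeline uses):

```python
import geopandas as gpd


def tribal_overlap_areas(
    tracts: gpd.GeoDataFrame, tribal_areas: gpd.GeoDataFrame
) -> gpd.GeoDataFrame:
    # Reproject both layers to a projected CRS before doing area math.
    tracts = tracts.to_crs("EPSG:5070")
    tribal_areas = tribal_areas.to_crs("EPSG:5070")
    # keep_geom_type=False: mixed polygons/multipolygons are fine, we only want overlap.
    overlap = gpd.overlay(
        tracts, tribal_areas, how="intersection", keep_geom_type=False
    )
    overlap["overlap_area"] = overlap.geometry.area
    return overlap
```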

* Readd CDC dataset config (#1900)

* adding comments to fips code

* delete unnecessary loggers

Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov>

* Improve score test documentation based on Lucas's feedback (#1835) (#1914)

* Better document base on Lucas's feedback (#1835)

* Fix typo (#1835)

* Add test to verify GEOJSON matches tiles (#1835)

* Remove NOOP line (#1835)

* Move GEOJSON generation up for new smoketest (#1835)

* Fixup code format (#1835)

* Update readme for new smoketest (#1835)

* Cleanup source tests (#1912)

* Move test to base for broader coverage (#1848)

* Remove duplicate line (#1848)

* FUDS needed an extra mock (#1848)

* Add tribal count notebook (#1917) (#1919)

* Add tribal count notebook (#1917)

* test without caching

* added comment

Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov>

* Add tribal overlap to downloads (#1907)

* Add tribal data to downloads (#1904)

* Update test pickle with current cols (#1904)

* Remove text of tribe names from GeoJSON (#1904)

* Update test data (#1904)

* Add tribal overlap to smoketests (#1904)

* Issue 1910: Do not impute income for 0 population tracts (#1918)

* should be working, has unnecessary loggers

* removing loggers and cleaning up

* updating ejscreen tests

* adding tests and responding to PR feedback

* fixing broken smoke test

* delete smoketest docs

* updating click

* updating click

* Bump just jupyterlab (#1930)

* Fixing link checker (#1929)

* Update deps safety says are vulnerable (#1937) (#1938)

Co-authored-by: matt bowen <matt@mattbowen.net>

* Add demos for island areas (#1932)

* Backfill population in island areas (#1882)

* Update smoketest to account for backfills (#1882)

As I wrote in the comment:
We backfill island areas with data from the 2010 census, so if THOSE tracts
have data beyond the data source, that's to be expected and is fine to pass.
If some other state or territory does though, this should fail.

This ends up being a nice way of documenting that behavior, I guess!

* Fixup lint issues (#1882)

* Add in race demos to 2010 census pull (#1851)

* Add backfill data to score (#1851)

* Change column name (#1851)

* Fill demos after the score (#1851)

* Add income back, adjust test (#1882)

* Apply code-review feedback (#1851)

* Add test for island area backfill (#1851)

* Fix bad rename (#1851)

* Reorder download fields, add plumbing back (#1942)

* Add back lack of plumbing fields (#1920)

* Reorder fields for excel (#1921)

* Reorder excel fields (#1921)

* Fix formatting, lint errors, pickles (#1921)

* Add missing plumbing col, fix order again (#1921)

* Update that pickle (#1921)

* refactoring tribal (#1960)

* updated with scoring comparison

* updated for narwhal -- leaving commented code in for now

* pydantic upgrade

* produce a string for the front end to ingest (#1963)

* wip

* I believe this works -- let's see the pipeline

* updated fixtures

* Adding ADJLI_ET (#1976)

* updated tile data

* ensuring adjli_et in

* Add back income percentile (#1977)

* Add missing field to download (#1964)

* Remove pydantic since it's unused (#1964)

* Add percentile to CSV (#1964)

* Update downloadable pickle (#1964)

* Issue 105: Configure and run `black` and other pre-commit hooks (clean branch) (#1962)

* Configure and run `black` and other pre-commit hooks

Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov>

* Removing fixed python version for black (#1985)

* Fixup TA_COUNT and TA_PERC (#1991)

* Change TA_PERC, change TA_COUNT (#1988, #1989)

- Make TA_PERC_STR back into a nullable float following the rules
  requested in #1989
- Move TA_COUNT to be TA_COUNT_AK, also add a null TA_COUNT_C for CONUS
  that we can fill in later.

* Fix typo comment (#1988)

* Issue 1992: Do not impute income for null population tracts (#1993)

* Hotfix for DOT data source DNS issue (#1999)

* Make tribal overlap set score N (#2004)

* Add "Is a Tribal DAC" field (#1998)

* Add tribal DACs to score N final (#1998)

* Add new fields to downloads (#1998)

* Make a int a float (#1998)

* Update field names, apply feedback (#1998)

* Add assertions around codebook (#2014)

* Add assertion around codebook (#1505)

* Assert csv and excel have same cols (#1505)

* Remove suffixes from tribal lands (#1974) (#2008)

* Data source location (#2015)

* data source location

* toml

* cdc_places

* cdc_svi_index

* url updates

* child oppy and dot travel

* up to hud_recap

* completed ticket

* cache bust

* hud_recap

* us_army_fuds

* Remove vars the frontend doesn't use (#2020) (#2022)

I did a pretty rough and simple analysis of the variables we put in the
tiles and grepped the frontend code to see if (1) they're ever accessed
and (2) if they're used, even if they're read once. I removed everything
I noticed was not accessed.

* Disable file size limits on tiles (#2031)

* Disable file size limits on tiles

* Remove print debugs

I know.

* Update file name pattern (#2037) (#2038)

* Update file name pattern (#2037)

* Remove ETL from generation (2037)

I looked more carefully, and this ETL step isn't used in the score, so
there's no need to run it every time. Per previous steps, I removed it
from constants, so the code is there but it won't run by default.

* Round ALL the float fields for the tiles (#2040)

* Round ALL the float fields for the tiles (#2033)

* Floor in a simpler way (#2033)

Emma pointed out that all the stuff we're doing in floor_series is
probably unnecessary for this case, so just use the built-in floor.
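Something like this sketch, rounding down at the tile decimal precision (names are illustrative):

```python
import numpy as np
import pandas as pd


def floor_to_decimals(series: pd.Series, decimals: int = 2) -> pd.Series:
    # Round every float tile field down at the given number of decimals.
    factor = 10 ** decimals
    return np.floor(series * factor) / factor
```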

* Update pickle I missed (#2033)

* Clean commit of just aggregate burden notebook (#1819)

added a burden notebook

* Update the dockerfile (#2045)

* Update so the image builds (#2026)

* Fix bad dict (2026)

* Rename census tract field in downloads (#2068)

* Change tract ID field name (2060)

* Update lockfile (#2061)

* Bump safety, jupyter, wheel (#2061)

* Don't depend directly on wheel (2061)

* Bring narwhal reqs in line with main

* Update tribal area counts (#2071)

* Rename tribal area field (2062)

* Add missing file (#2062)

* Add checks to create version (#2047) (#2052)

* Fix failing safety (#2114)

* Ignore vuln that doesn't affect us 2113

https://nvd.nist.gov/vuln/detail/CVE-2022-42969 landed recently and
there's no fix in py (which is in maintenance mode). From my analysis, that
CVE cannot hurt us (famous last words), so we'll ignore the vuln for
now.

* 2113 Update our gdal ppa

* that didn't work (2113)

* Don't add the PPA, the package exists (#2113)

* Fix type (#2113)

* Force an update of wheel 2113

* Also remove PPA line from create-score-versions

* Drop 3.8 because of wheel 2113

* Put back 3.8, use newer actions

* Try another way of upgrading wheel 2113

* Upgrade wheel in tox too 2113

* Typo fix 2113

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Emma Nechamkin <97977170+emma-nechamkin@users.noreply.github.com>
Co-authored-by: Shelby Switzer <shelby.c.switzer@omb.eop.gov>
Co-authored-by: Shelby Switzer <shelbyswitzer@gmail.com>
Co-authored-by: Emma Nechamkin <Emma.J.Nechamkin@omb.eop.gov>
Co-authored-by: Matt Bowen <83967628+mattbowen-usds@users.noreply.github.com>
Co-authored-by: Jorge Escobar <83969469+esfoobar-usds@users.noreply.github.com>
Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov>
Co-authored-by: Jorge Escobar <jorge.e.escobar@omb.eop.gov>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov>
Co-authored-by: matt bowen <matt@mattbowen.net>
Commit b97e60bfbb by Vim, 2022-12-01 18:50:54 -08:00, committed by GitHub.
285 changed files with 20485 additions and 3880 deletions


@@ -35,7 +35,6 @@ datasets:
include_in_tiles: true
include_in_downloadable_files: true
create_percentile: true
- short_name: "ex_ag_loss"
df_field_name: "EXPECTED_AGRICULTURE_LOSS_RATE_FIELD_NAME"
long_name: "Expected agricultural loss rate (Natural Hazards Risk Index)"
@@ -54,7 +53,6 @@ datasets:
include_in_tiles: true
include_in_downloadable_files: true
create_percentile: true
- short_name: "ex_bldg_loss"
df_field_name: "EXPECTED_BUILDING_LOSS_RATE_FIELD_NAME"
long_name: "Expected building loss rate (Natural Hazards Risk Index)"
@@ -72,8 +70,262 @@ datasets:
include_in_tiles: true
include_in_downloadable_files: true
create_percentile: true
- short_name: "has_ag_val"
df_field_name: "CONTAINS_AGRIVALUE"
long_name: "Contains agricultural value"
field_type: bool
- long_name: "Child Opportunity Index 2.0 database"
short_name: "coi"
module_name: "child_opportunity_index"
input_geoid_tract_field_name: "geoid"
load_fields:
- short_name: "he_heat"
df_field_name: "EXTREME_HEAT_FIELD"
long_name: "Summer days above 90F"
field_type: float
include_in_downloadable_files: true
include_in_tiles: true
- short_name: "he_food"
long_name: "Percent low access to healthy food"
df_field_name: "HEALTHY_FOOD_FIELD"
field_type: float
include_in_downloadable_files: true
include_in_tiles: true
- short_name: "he_green"
long_name: "Percent impenetrable surface areas"
df_field_name: "IMPENETRABLE_SURFACES_FIELD"
field_type: float
include_in_downloadable_files: true
include_in_tiles: true
- short_name: "ed_reading"
df_field_name: "READING_FIELD"
long_name: "Third grade reading proficiency"
field_type: float
include_in_downloadable_files: true
include_in_tiles: true
- long_name: "Low-Income Energy Affordabililty Data"
short_name: "LEAD"
module_name: "doe_energy_burden"
input_geoid_tract_field_name: "FIP"
load_fields:
- short_name: "EBP_PFS"
df_field_name: "REVISED_ENERGY_BURDEN_FIELD_NAME"
long_name: "Energy burden"
field_type: float
include_in_downloadable_files: true
include_in_tiles: true
- long_name: "Formerly Used Defense Sites"
short_name: "FUDS"
module_name: "us_army_fuds"
load_fields:
- short_name: "fuds_count"
df_field_name: "ELIGIBLE_FUDS_COUNT_FIELD_NAME"
long_name: "Count of eligible Formerly Used Defense Site (FUDS) properties centroids"
description_short:
"The number of FUDS marked as Eligible and Has Project in the tract."
field_type: int64
include_in_tiles: false
include_in_downloadable_files: false
- short_name: "not_fuds_ct"
df_field_name: "INELIGIBLE_FUDS_COUNT_FIELD_NAME"
long_name: "Count of ineligible Formerly Used Defense Site (FUDS) properties centroids"
description_short:
"The number of FUDS marked as Ineligible or Project in the tract."
field_type: int64
include_in_tiles: false
include_in_downloadable_files: false
- short_name: "has_fuds"
df_field_name: "ELIGIBLE_FUDS_BINARY_FIELD_NAME"
long_name: "Is there at least one Formerly Used Defense Site (FUDS) in the tract?"
description_short:
"Whether the tract has a FUDS"
field_type: bool
include_in_tiles: false
include_in_downloadable_files: false
- long_name: "Abandoned Mine Land Inventory System"
short_name: "eAMLIS"
module_name: "eamlis"
load_fields:
- short_name: "has_aml"
df_field_name: "AML_BOOLEAN"
long_name: "Is there at least one abandoned mine in this census tract?"
description_short:
"Whether the tract has an abandoned mine"
field_type: bool
include_in_tiles: true
include_in_downloadable_files: true
- long_name: "Example ETL"
short_name: "Example"
module_name: "example_dataset"
input_geoid_tract_field_name: "GEOID10_TRACT"
load_fields:
- short_name: "EXAMPLE_FIELD"
df_field_name: "Input Field 1"
long_name: "Example Field 1"
field_type: float
include_in_tiles: true
include_in_downloadable_files: true
- long_name: "First Street Foundation Flood Risk"
short_name: "FSF Flood Risk"
module_name: fsf_flood_risk
input_geoid_tract_field_name: "GEOID"
load_fields:
- short_name: "flood_eligible_properties"
df_field_name: "COUNT_PROPERTIES"
long_name: "Count of properties eligible for flood risk calculation within tract (floor of 250)"
field_type: float
include_in_tiles: false
include_in_downloadable_files: true
create_percentile: false
- short_name: "flood_risk_properties_today"
df_field_name: "PROPERTIES_AT_RISK_FROM_FLOODING_TODAY"
long_name: "Count of properties at risk of flood today"
field_type: float
include_in_tiles: false
include_in_downloadable_files: true
create_percentile: false
- short_name: "flood_risk_properties_30yrs"
df_field_name: "PROPERTIES_AT_RISK_FROM_FLOODING_IN_30_YEARS"
long_name: "Count of properties at risk of flood in 30 years"
field_type: float
include_in_tiles: false
include_in_downloadable_files: true
create_percentile: false
- short_name: "flood_risk_share_today"
df_field_name: "SHARE_OF_PROPERTIES_AT_RISK_FROM_FLOODING_TODAY"
long_name: "Share of properties at risk of flood today"
field_type: float
include_in_tiles: false
include_in_downloadable_files: true
create_percentile: true
- short_name: "flood_risk_share_30yrs"
df_field_name: "SHARE_OF_PROPERTIES_AT_RISK_FROM_FLOODING_IN_30_YEARS"
long_name: "Share of properties at risk of flood in 30 years"
field_type: float
include_in_tiles: false
include_in_downloadable_files: true
create_percentile: true
- long_name: "First Street Foundation Wildfire Risk"
short_name: "FSF Wildfire Risk"
module_name: fsf_wildfire_risk
input_geoid_tract_field_name: "GEOID"
load_fields:
- short_name: "fire_eligible_properties"
df_field_name: "COUNT_PROPERTIES"
long_name: "Count of properties eligible for wildfire risk calculation within tract (floor of 250)"
field_type: float
include_in_tiles: false
include_in_downloadable_files: true
create_percentile: false
- short_name: "fire_risk_properties_today"
df_field_name: "PROPERTIES_AT_RISK_FROM_FIRE_TODAY"
long_name: "Count of properties at risk of wildfire today"
field_type: float
include_in_tiles: false
include_in_downloadable_files: true
create_percentile: false
- short_name: "fire_risk_properties_30yrs"
df_field_name: "PROPERTIES_AT_RISK_FROM_FIRE_IN_30_YEARS"
long_name: "Count of properties at risk of wildfire in 30 years"
field_type: float
include_in_tiles: false
include_in_downloadable_files: true
create_percentile: false
- short_name: "fire_risk_share_today"
df_field_name: "SHARE_OF_PROPERTIES_AT_RISK_FROM_FIRE_TODAY"
long_name: "Share of properties at risk of fire today"
field_type: float
include_in_tiles: false
include_in_downloadable_files: true
create_percentile: true
- short_name: "fire_risk_share_30yrs"
df_field_name: "SHARE_OF_PROPERTIES_AT_RISK_FROM_FIRE_IN_30_YEARS"
long_name: "Share of properties at risk of fire in 30 years"
field_type: float
include_in_tiles: false
include_in_downloadable_files: true
create_percentile: true
- long_name: "DOT Travel Disadvantage Index"
short_name: "DOT"
module_name: "travel_composite"
input_geoid_tract_field_name: "GEOID10_TRACT"
load_fields:
- short_name: "travel_burden"
df_field_name: "TRAVEL_BURDEN_FIELD_NAME"
long_name: "DOT Travel Barriers Score"
field_type: float
include_in_tiles: true
include_in_downloadable_files: true
create_percentile: true
- long_name: "National Land Cover Database (NLCD) Lack of Green Space / Nature-Deprived Communities dataset, as compiled by TPL"
short_name: "nlcd_nature_deprived"
module_name: "nlcd_nature_deprived"
input_geoid_tract_field_name: "GEOID10_TRACT"
load_fields:
- short_name: "ncld_eligible"
df_field_name: "ELIGIBLE_FOR_NATURE_DEPRIVED_FIELD_NAME"
long_name: "Does the tract have at least 35 acres in it?"
field_type: bool
include_in_tiles: true
include_in_downloadable_files: true
create_percentile: false
- short_name: "percent_impervious"
df_field_name: "TRACT_PERCENT_IMPERVIOUS_FIELD_NAME"
long_name: "Share of the tract's land area that is covered by impervious surface as a percent"
field_type: percentage
include_in_tiles: true
include_in_downloadable_files: true
create_percentile: true
- short_name: "percent_nonnatural"
df_field_name: "TRACT_PERCENT_NON_NATURAL_FIELD_NAME"
long_name: "Share of the tract's land area that is covered by impervious surface or cropland as a percent"
field_type: percentage
include_in_tiles: true
include_in_downloadable_files: true
create_percentile: true
- short_name: "percent_cropland"
df_field_name: "TRACT_PERCENT_CROPLAND_FIELD_NAME"
long_name: "Share of the tract's land area that is covered by cropland as a percent"
field_type: percentage
include_in_tiles: true
include_in_downloadable_files: true
create_percentile: true
- long_name: "Overlap between Census tract boundaries and Tribal area boundaries."
short_name: "tribal_overlap"
module_name: "tribal_overlap"
input_geoid_tract_field_name: "GEOID10_TRACT"
load_fields:
- short_name: "tribal_count"
df_field_name: "COUNT_OF_TRIBAL_AREAS_IN_TRACT"
long_name: "Number of Tribal areas within Census tract"
field_type: int64
include_in_tiles: true
include_in_downloadable_files: true
create_percentile: false
- short_name: "tribal_percent"
df_field_name: "PERCENT_OF_TRIBAL_AREA_IN_TRACT"
long_name: "Percent of the Census tract that is within Tribal areas"
field_type: float
include_in_tiles: true
include_in_downloadable_files: true
create_percentile: false
number_of_decimals_in_output: 6
- short_name: "tribal_names"
df_field_name: "NAMES_OF_TRIBAL_AREAS_IN_TRACT"
long_name: "Names of Tribal areas within Census tract"
field_type: string
include_in_tiles: true
include_in_downloadable_files: true
- long_name: "CDC Life Expeectancy"
short_name: "cdc_life_expectancy"
module_name: "cdc_life_expectancy"
input_geoid_tract_field_name: "Tract ID"
load_fields:
- short_name: "LLEF"
df_field_name: "LIFE_EXPECTANCY_FIELD_NAME"
long_name: "Life expectancy (years)"
field_type: float
include_in_tiles: false
include_in_downloadable_files: true
create_percentile: false
create_reverse_percentile: true


@@ -2,9 +2,11 @@ import os
from pathlib import Path
from data_pipeline.config import settings
from data_pipeline.score import field_names
## note: to keep map porting "right" fields, keeping descriptors the same.
# Base Paths
DATA_PATH = Path(settings.APP_ROOT) / "data"
TMP_PATH = DATA_PATH / "tmp"
@@ -115,7 +117,7 @@ ISLAND_AREAS_EXPLANATION = (
CENSUS_COUNTIES_COLUMNS = ["USPS", "GEOID", "NAME"]
# Drop FIPS codes from map
DROP_FIPS_CODES = ["66", "78"]
DROP_FIPS_CODES = []
# Drop FIPS codes from incrementing
DROP_FIPS_FROM_NON_WTD_THRESHOLDS = "72"
@@ -138,7 +140,7 @@ TILES_ROUND_NUM_DECIMALS = 2
# Controlling Tile user experience columns
THRESHOLD_COUNT_TO_SHOW_FIELD_NAME = "THRHLD"
TILES_ISLAND_AREAS_THRESHOLD_COUNT = 3
TILES_PUERTO_RICO_THRESHOLD_COUNT = 4
TILES_PUERTO_RICO_THRESHOLD_COUNT = 10
TILES_NATION_THRESHOLD_COUNT = 21
# Note that the FIPS code is a string
@@ -146,6 +148,58 @@ TILES_NATION_THRESHOLD_COUNT = 21
# 60: American Samoa, 66: Guam, 69: N. Mariana Islands, 78: US Virgin Islands
TILES_ISLAND_AREA_FIPS_CODES = ["60", "66", "69", "78"]
TILES_PUERTO_RICO_FIPS_CODE = ["72"]
TILES_ALASKA_AND_HAWAII_FIPS_CODE = ["02", "15"]
TILES_CONTINENTAL_US_FIPS_CODE = [
"01",
"04",
"05",
"06",
"08",
"09",
"10",
"11",
"12",
"13",
"16",
"17",
"18",
"19",
"20",
"21",
"22",
"23",
"24",
"25",
"26",
"27",
"28",
"29",
"30",
"31",
"32",
"33",
"34",
"35",
"36",
"37",
"38",
"39",
"40",
"41",
"42",
"44",
"45",
"46",
"47",
"48",
"49",
"50",
"51",
"53",
"54",
"55",
"56",
]
# Constant to reflect UI Experience version
# "Nation" referring to 50 states and DC is from Census
@@ -189,16 +243,17 @@ TILES_SCORE_COLUMNS = {
+ field_names.PERCENTILE_FIELD_SUFFIX: "LIF_PFS",
field_names.LOW_MEDIAN_INCOME_AS_PERCENT_OF_AMI_FIELD
+ field_names.PERCENTILE_FIELD_SUFFIX: "LMI_PFS",
field_names.MEDIAN_HOUSE_VALUE_FIELD
+ field_names.PERCENTILE_FIELD_SUFFIX: "MHVF_PFS",
field_names.PM25_FIELD + field_names.PERCENTILE_FIELD_SUFFIX: "PM25F_PFS",
field_names.HIGH_SCHOOL_ED_FIELD: "HSEF",
field_names.POVERTY_LESS_THAN_100_FPL_FIELD
+ field_names.PERCENTILE_FIELD_SUFFIX: "P100_PFS",
field_names.POVERTY_LESS_THAN_200_FPL_FIELD
+ field_names.PERCENTILE_FIELD_SUFFIX: "P200_PFS",
field_names.POVERTY_LESS_THAN_200_FPL_IMPUTED_FIELD
+ field_names.PERCENTILE_FIELD_SUFFIX: "P200_I_PFS",
field_names.FPL_200_SERIES_IMPUTED_AND_ADJUSTED_DONUTS: "AJDLI_ET",
field_names.LEAD_PAINT_FIELD
+ field_names.PERCENTILE_FIELD_SUFFIX: "LPF_PFS",
field_names.NO_KITCHEN_OR_INDOOR_PLUMBING_FIELD
+ field_names.PERCENTILE_FIELD_SUFFIX: "KP_PFS",
field_names.NPL_FIELD + field_names.PERCENTILE_FIELD_SUFFIX: "NPL_PFS",
field_names.RMP_FIELD + field_names.PERCENTILE_FIELD_SUFFIX: "RMP_PFS",
field_names.TSDF_FIELD + field_names.PERCENTILE_FIELD_SUFFIX: "TSDF_PFS",
@@ -208,37 +263,24 @@ TILES_SCORE_COLUMNS = {
+ field_names.PERCENTILE_FIELD_SUFFIX: "UF_PFS",
field_names.WASTEWATER_FIELD
+ field_names.PERCENTILE_FIELD_SUFFIX: "WF_PFS",
field_names.M_WATER: "M_WTR",
field_names.M_WORKFORCE: "M_WKFC",
field_names.M_CLIMATE: "M_CLT",
field_names.M_ENERGY: "M_ENY",
field_names.M_TRANSPORTATION: "M_TRN",
field_names.M_HOUSING: "M_HSG",
field_names.M_POLLUTION: "M_PLN",
field_names.M_HEALTH: "M_HLTH",
field_names.SCORE_M_COMMUNITIES: "SM_C",
field_names.SCORE_M + field_names.PERCENTILE_FIELD_SUFFIX: "SM_PFS",
field_names.EXPECTED_POPULATION_LOSS_RATE_LOW_INCOME_LOW_HIGHER_ED_FIELD: "EPLRLI",
field_names.EXPECTED_AGRICULTURE_LOSS_RATE_LOW_INCOME_LOW_HIGHER_ED_FIELD: "EALRLI",
field_names.EXPECTED_BUILDING_LOSS_RATE_LOW_INCOME_LOW_HIGHER_ED_FIELD: "EBLRLI",
field_names.PM25_EXPOSURE_LOW_INCOME_LOW_HIGHER_ED_FIELD: "PM25LI",
field_names.ENERGY_BURDEN_LOW_INCOME_LOW_HIGHER_ED_FIELD: "EBLI",
field_names.DIESEL_PARTICULATE_MATTER_LOW_INCOME_LOW_HIGHER_ED_FIELD: "DPMLI",
field_names.TRAFFIC_PROXIMITY_LOW_INCOME_LOW_HIGHER_ED_FIELD: "TPLI",
field_names.LEAD_PAINT_MEDIAN_HOUSE_VALUE_LOW_INCOME_LOW_HIGHER_ED_FIELD: "LPMHVLI",
field_names.HOUSING_BURDEN_LOW_INCOME_LOW_HIGHER_ED_FIELD: "HBLI",
field_names.RMP_LOW_INCOME_LOW_HIGHER_ED_FIELD: "RMPLI",
field_names.SUPERFUND_LOW_INCOME_LOW_HIGHER_ED_FIELD: "SFLI",
field_names.HAZARDOUS_WASTE_LOW_INCOME_LOW_HIGHER_ED_FIELD: "HWLI",
field_names.WASTEWATER_DISCHARGE_LOW_INCOME_LOW_HIGHER_ED_FIELD: "WDLI",
field_names.DIABETES_LOW_INCOME_LOW_HIGHER_ED_FIELD: "DLI",
field_names.ASTHMA_LOW_INCOME_LOW_HIGHER_ED_FIELD: "ALI",
field_names.HEART_DISEASE_LOW_INCOME_LOW_HIGHER_ED_FIELD: "HDLI",
field_names.LOW_LIFE_EXPECTANCY_LOW_INCOME_LOW_HIGHER_ED_FIELD: "LLELI",
field_names.LINGUISTIC_ISOLATION_LOW_HS_LOW_HIGHER_ED_FIELD: "LILHSE",
field_names.POVERTY_LOW_HS_LOW_HIGHER_ED_FIELD: "PLHSE",
field_names.LOW_MEDIAN_INCOME_LOW_HS_LOW_HIGHER_ED_FIELD: "LMILHSE",
field_names.UNEMPLOYMENT_LOW_HS_LOW_HIGHER_ED_FIELD: "ULHSE",
field_names.UST_FIELD + field_names.PERCENTILE_FIELD_SUFFIX: "UST_PFS",
field_names.N_WATER: "N_WTR",
field_names.N_WORKFORCE: "N_WKFC",
field_names.N_CLIMATE: "N_CLT",
field_names.N_ENERGY: "N_ENY",
field_names.N_TRANSPORTATION: "N_TRN",
field_names.N_HOUSING: "N_HSG",
field_names.N_POLLUTION: "N_PLN",
field_names.N_HEALTH: "N_HLTH",
# temporarily update this so that it's the Narwhal score that gets visualized on the map
# The NEW final score value INCLUDES the adjacency index.
field_names.FINAL_SCORE_N_BOOLEAN: "SN_C",
field_names.IS_TRIBAL_DAC: "SN_T",
field_names.DIABETES_LOW_INCOME_FIELD: "DLI",
field_names.ASTHMA_LOW_INCOME_FIELD: "ALI",
field_names.POVERTY_LOW_HS_EDUCATION_FIELD: "PLHSE",
field_names.LOW_MEDIAN_INCOME_LOW_HS_EDUCATION_FIELD: "LMILHSE",
field_names.UNEMPLOYMENT_LOW_HS_EDUCATION_FIELD: "ULHSE",
# new booleans only for the environmental factors
field_names.EXPECTED_POPULATION_LOSS_EXCEEDS_PCTILE_THRESHOLD: "EPL_ET",
field_names.EXPECTED_AGRICULTURAL_LOSS_EXCEEDS_PCTILE_THRESHOLD: "EAL_ET",
@@ -248,11 +290,14 @@ TILES_SCORE_COLUMNS = {
field_names.DIESEL_EXCEEDS_PCTILE_THRESHOLD: "DS_ET",
field_names.TRAFFIC_PROXIMITY_PCTILE_THRESHOLD: "TP_ET",
field_names.LEAD_PAINT_PROXY_PCTILE_THRESHOLD: "LPP_ET",
field_names.HISTORIC_REDLINING_SCORE_EXCEEDED: "HRS_ET",
field_names.NO_KITCHEN_OR_INDOOR_PLUMBING_PCTILE_THRESHOLD: "KP_ET",
field_names.HOUSING_BURDEN_PCTILE_THRESHOLD: "HB_ET",
field_names.RMP_PCTILE_THRESHOLD: "RMP_ET",
field_names.NPL_PCTILE_THRESHOLD: "NPL_ET",
field_names.TSDF_PCTILE_THRESHOLD: "TSDF_ET",
field_names.WASTEWATER_PCTILE_THRESHOLD: "WD_ET",
field_names.UST_PCTILE_THRESHOLD: "UST_ET",
field_names.DIABETES_PCTILE_THRESHOLD: "DB_ET",
field_names.ASTHMA_PCTILE_THRESHOLD: "A_ET",
field_names.HEART_DISEASE_PCTILE_THRESHOLD: "HD_ET",
@@ -278,79 +323,56 @@ TILES_SCORE_COLUMNS = {
field_names.CENSUS_DECENNIAL_UNEMPLOYMENT_FIELD_2009
+ field_names.ISLAND_AREAS_PERCENTILE_ADJUSTMENT_FIELD
+ field_names.PERCENTILE_FIELD_SUFFIX: "IAULHSE_PFS",
field_names.LOW_HS_EDUCATION_LOW_HIGHER_ED_FIELD: "LHE",
field_names.LOW_HS_EDUCATION_FIELD: "LHE",
field_names.ISLAND_AREAS_LOW_HS_EDUCATION_FIELD: "IALHE",
# Percentage of HS Degree completion for Islands
field_names.CENSUS_DECENNIAL_HIGH_SCHOOL_ED_FIELD_2009: "IAHSEF",
field_names.COLLEGE_ATTENDANCE_FIELD: "CA",
field_names.COLLEGE_NON_ATTENDANCE_FIELD: "NCA",
# This is logically equivalent to "non-college greater than 80%"
field_names.COLLEGE_ATTENDANCE_LESS_THAN_20_FIELD: "CA_LT20",
# Booleans for the front end about the types of thresholds exceeded
field_names.CLIMATE_THRESHOLD_EXCEEDED: "M_CLT_EOMI",
field_names.ENERGY_THRESHOLD_EXCEEDED: "M_ENY_EOMI",
field_names.TRAFFIC_THRESHOLD_EXCEEDED: "M_TRN_EOMI",
field_names.HOUSING_THREHSOLD_EXCEEDED: "M_HSG_EOMI",
field_names.POLLUTION_THRESHOLD_EXCEEDED: "M_PLN_EOMI",
field_names.WATER_THRESHOLD_EXCEEDED: "M_WTR_EOMI",
field_names.HEALTH_THRESHOLD_EXCEEDED: "M_HLTH_EOMI",
field_names.WORKFORCE_THRESHOLD_EXCEEDED: "M_WKFC_EOMI",
field_names.CLIMATE_THRESHOLD_EXCEEDED: "N_CLT_EOMI",
field_names.ENERGY_THRESHOLD_EXCEEDED: "N_ENY_EOMI",
field_names.TRAFFIC_THRESHOLD_EXCEEDED: "N_TRN_EOMI",
field_names.HOUSING_THREHSOLD_EXCEEDED: "N_HSG_EOMI",
field_names.POLLUTION_THRESHOLD_EXCEEDED: "N_PLN_EOMI",
field_names.WATER_THRESHOLD_EXCEEDED: "N_WTR_EOMI",
field_names.HEALTH_THRESHOLD_EXCEEDED: "N_HLTH_EOMI",
field_names.WORKFORCE_THRESHOLD_EXCEEDED: "N_WKFC_EOMI",
# These are the booleans for socioeconomic indicators
## this measures low income boolean
field_names.FPL_200_SERIES: "FPL200S",
## Low high school and low higher ed for t&wd
field_names.WORKFORCE_SOCIO_INDICATORS_EXCEEDED: "M_WKFC_EBSI",
## FPL 200 and low higher ed for all others
field_names.FPL_200_AND_COLLEGE_ATTENDANCE_SERIES: "M_EBSI",
field_names.FPL_200_SERIES_IMPUTED_AND_ADJUSTED: "FPL200S",
## Low high school for t&wd
field_names.WORKFORCE_SOCIO_INDICATORS_EXCEEDED: "N_WKFC_EBSI",
field_names.DOT_BURDEN_PCTILE_THRESHOLD: "TD_ET",
field_names.DOT_TRAVEL_BURDEN_FIELD
+ field_names.PERCENTILE_FIELD_SUFFIX: "TD_PFS",
field_names.FUTURE_FLOOD_RISK_FIELD
+ field_names.PERCENTILE_FIELD_SUFFIX: "FLD_PFS",
field_names.FUTURE_WILDFIRE_RISK_FIELD
+ field_names.PERCENTILE_FIELD_SUFFIX: "WFR_PFS",
field_names.HIGH_FUTURE_FLOOD_RISK_FIELD: "FLD_ET",
field_names.HIGH_FUTURE_WILDFIRE_RISK_FIELD: "WFR_ET",
field_names.ADJACENT_TRACT_SCORE_ABOVE_DONUT_THRESHOLD: "ADJ_ET",
field_names.TRACT_PERCENT_NON_NATURAL_FIELD_NAME
+ field_names.PERCENTILE_FIELD_SUFFIX: "IS_PFS",
field_names.NON_NATURAL_LOW_INCOME_FIELD_NAME: "IS_ET",
field_names.AML_BOOLEAN_FILLED_IN: "AML_ET",
field_names.ELIGIBLE_FUDS_BINARY_FIELD_NAME: "FUDS_RAW",
field_names.ELIGIBLE_FUDS_FILLED_IN_FIELD_NAME: "FUDS_ET",
field_names.IMPUTED_INCOME_FLAG_FIELD_NAME: "IMP_FLG",
## FPL 200 and low higher ed for all others should no longer be M_EBSI, but rather
## FPL_200 (there is no higher ed in narwhal)
field_names.PERCENT_BLACK_FIELD_NAME: "DM_B",
field_names.PERCENT_AMERICAN_INDIAN_FIELD_NAME: "DM_AI",
field_names.PERCENT_ASIAN_FIELD_NAME: "DM_A",
field_names.PERCENT_HAWAIIAN_FIELD_NAME: "DM_HI",
field_names.PERCENT_TWO_OR_MORE_RACES_FIELD_NAME: "DM_T",
field_names.PERCENT_NON_HISPANIC_WHITE_FIELD_NAME: "DM_W",
field_names.PERCENT_HISPANIC_FIELD_NAME: "DM_H",
field_names.PERCENT_OTHER_RACE_FIELD_NAME: "DM_O",
field_names.PERCENT_AGE_UNDER_10: "AGE_10",
field_names.PERCENT_AGE_10_TO_64: "AGE_MIDDLE",
field_names.PERCENT_AGE_OVER_64: "AGE_OLD",
field_names.COUNT_OF_TRIBAL_AREAS_IN_TRACT_AK: "TA_COUNT_AK",
field_names.COUNT_OF_TRIBAL_AREAS_IN_TRACT_CONUS: "TA_COUNT_C",
field_names.PERCENT_OF_TRIBAL_AREA_IN_TRACT: "TA_PERC",
field_names.PERCENT_OF_TRIBAL_AREA_IN_TRACT_DISPLAY: "TA_PERC_FE",
}
# columns to round floats to 2 decimals
# TODO refactor to use much smaller subset of fields we DON'T want to round
TILES_SCORE_FLOAT_COLUMNS = [
field_names.DIABETES_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
field_names.ASTHMA_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
field_names.HEART_DISEASE_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
field_names.DIESEL_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
field_names.ENERGY_BURDEN_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
field_names.EXPECTED_AGRICULTURE_LOSS_RATE_FIELD
+ field_names.PERCENTILE_FIELD_SUFFIX,
field_names.EXPECTED_BUILDING_LOSS_RATE_FIELD
+ field_names.PERCENTILE_FIELD_SUFFIX,
field_names.EXPECTED_POPULATION_LOSS_RATE_FIELD
+ field_names.PERCENTILE_FIELD_SUFFIX,
field_names.HOUSING_BURDEN_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
field_names.LOW_LIFE_EXPECTANCY_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
field_names.LINGUISTIC_ISO_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
field_names.LOW_MEDIAN_INCOME_AS_PERCENT_OF_AMI_FIELD
+ field_names.PERCENTILE_FIELD_SUFFIX,
field_names.MEDIAN_HOUSE_VALUE_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
field_names.PM25_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
field_names.POVERTY_LESS_THAN_100_FPL_FIELD
+ field_names.PERCENTILE_FIELD_SUFFIX,
field_names.POVERTY_LESS_THAN_200_FPL_FIELD
+ field_names.PERCENTILE_FIELD_SUFFIX,
field_names.LEAD_PAINT_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
field_names.NPL_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
field_names.RMP_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
field_names.TSDF_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
field_names.TRAFFIC_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
field_names.UNEMPLOYMENT_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
# Percentiles for Island areas' workforce columns
# To be clear: the island areas pull from 2009 census. PR does not.
field_names.LOW_CENSUS_DECENNIAL_AREA_MEDIAN_INCOME_PERCENT_FIELD_2009
+ field_names.PERCENTILE_FIELD_SUFFIX,
field_names.CENSUS_DECENNIAL_POVERTY_LESS_THAN_100_FPL_FIELD_2009
+ field_names.ISLAND_AREAS_PERCENTILE_ADJUSTMENT_FIELD
+ field_names.PERCENTILE_FIELD_SUFFIX,
field_names.CENSUS_DECENNIAL_UNEMPLOYMENT_FIELD_2009
+ field_names.ISLAND_AREAS_PERCENTILE_ADJUSTMENT_FIELD
+ field_names.PERCENTILE_FIELD_SUFFIX,
# Island areas HS degree attainment rate
field_names.CENSUS_DECENNIAL_HIGH_SCHOOL_ED_FIELD_2009,
field_names.LOW_HS_EDUCATION_LOW_HIGHER_ED_FIELD,
field_names.ISLAND_AREAS_LOW_HS_EDUCATION_FIELD,
field_names.WASTEWATER_FIELD + field_names.PERCENTILE_FIELD_SUFFIX,
field_names.SCORE_M + field_names.PERCENTILE_FIELD_SUFFIX,
field_names.COLLEGE_NON_ATTENDANCE_FIELD,
field_names.COLLEGE_ATTENDANCE_FIELD,
]


@@ -1,17 +1,26 @@
import functools
from collections import namedtuple
from dataclasses import dataclass
from typing import List
import numpy as np
import pandas as pd
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.score import constants
from data_pipeline.etl.sources.census_acs.etl import CensusACSETL
from data_pipeline.etl.sources.dot_travel_composite.etl import (
TravelCompositeETL,
)
from data_pipeline.etl.sources.eamlis.etl import AbandonedMineETL
from data_pipeline.etl.sources.fsf_flood_risk.etl import FloodRiskETL
from data_pipeline.etl.sources.fsf_wildfire_risk.etl import WildfireRiskETL
from data_pipeline.etl.sources.national_risk_index.etl import (
NationalRiskIndexETL,
)
from data_pipeline.score.score_runner import ScoreRunner
from data_pipeline.etl.sources.nlcd_nature_deprived.etl import NatureDeprivedETL
from data_pipeline.etl.sources.tribal_overlap.etl import TribalOverlapETL
from data_pipeline.etl.sources.us_army_fuds.etl import USArmyFUDS
from data_pipeline.score import field_names
from data_pipeline.etl.score import constants
from data_pipeline.score.score_runner import ScoreRunner
from data_pipeline.utils import get_module_logger
logger = get_module_logger(__name__)
@@ -24,7 +33,7 @@ class ScoreETL(ExtractTransformLoad):
# dataframes
self.df: pd.DataFrame
self.ejscreen_df: pd.DataFrame
self.census_df: pd.DataFrame
self.census_acs_df: pd.DataFrame
self.hud_housing_df: pd.DataFrame
self.cdc_places_df: pd.DataFrame
self.census_acs_median_incomes_df: pd.DataFrame
@@ -32,18 +41,25 @@ class ScoreETL(ExtractTransformLoad):
self.doe_energy_burden_df: pd.DataFrame
self.national_risk_index_df: pd.DataFrame
self.geocorr_urban_rural_df: pd.DataFrame
self.persistent_poverty_df: pd.DataFrame
self.census_decennial_df: pd.DataFrame
self.census_2010_df: pd.DataFrame
self.child_opportunity_index_df: pd.DataFrame
self.national_tract_df: pd.DataFrame
self.hrs_df: pd.DataFrame
self.dot_travel_disadvantage_df: pd.DataFrame
self.fsf_flood_df: pd.DataFrame
self.fsf_fire_df: pd.DataFrame
self.nature_deprived_df: pd.DataFrame
self.eamlis_df: pd.DataFrame
self.fuds_df: pd.DataFrame
self.tribal_overlap_df: pd.DataFrame
self.ISLAND_DEMOGRAPHIC_BACKFILL_FIELDS: List[str] = []
def extract(self) -> None:
logger.info("Loading data sets from disk.")
# EJSCreen csv Load
ejscreen_csv = (
constants.DATA_PATH / "dataset" / "ejscreen_2019" / "usa.csv"
)
ejscreen_csv = constants.DATA_PATH / "dataset" / "ejscreen" / "usa.csv"
self.ejscreen_df = pd.read_csv(
ejscreen_csv,
dtype={self.GEOID_TRACT_FIELD_NAME: "string"},
@@ -51,14 +67,7 @@ class ScoreETL(ExtractTransformLoad):
)
# Load census data
census_csv = (
constants.DATA_PATH / "dataset" / "census_acs_2019" / "usa.csv"
)
self.census_df = pd.read_csv(
census_csv,
dtype={self.GEOID_TRACT_FIELD_NAME: "string"},
low_memory=False,
)
self.census_acs_df = CensusACSETL.get_data_frame()
# Load HUD housing data
hud_housing_csv = (
@@ -116,6 +125,27 @@ class ScoreETL(ExtractTransformLoad):
# Load FEMA national risk index data
self.national_risk_index_df = NationalRiskIndexETL.get_data_frame()
# Load DOT Travel Disadvantage
self.dot_travel_disadvantage_df = TravelCompositeETL.get_data_frame()
# Load fire risk data
self.fsf_fire_df = WildfireRiskETL.get_data_frame()
# Load flood risk data
self.fsf_flood_df = FloodRiskETL.get_data_frame()
# Load NLCD Nature-Deprived Communities data
self.nature_deprived_df = NatureDeprivedETL.get_data_frame()
# Load eAMLIS dataset
self.eamlis_df = AbandonedMineETL.get_data_frame()
# Load FUDS dataset
self.fuds_df = USArmyFUDS.get_data_frame()
# Load Tribal overlap dataset
self.tribal_overlap_df = TribalOverlapETL.get_data_frame()
# Load GeoCorr Urban Rural Map
geocorr_urban_rural_csv = (
constants.DATA_PATH / "dataset" / "geocorr" / "usa.csv"
@@ -126,16 +156,6 @@ class ScoreETL(ExtractTransformLoad):
low_memory=False,
)
# Load persistent poverty
persistent_poverty_csv = (
constants.DATA_PATH / "dataset" / "persistent_poverty" / "usa.csv"
)
self.persistent_poverty_df = pd.read_csv(
persistent_poverty_csv,
dtype={self.GEOID_TRACT_FIELD_NAME: "string"},
low_memory=False,
)
# Load decennial census data
census_decennial_csv = (
constants.DATA_PATH
@ -159,19 +179,26 @@ class ScoreETL(ExtractTransformLoad):
low_memory=False,
)
# Load COI data
child_opportunity_index_csv = (
constants.DATA_PATH
/ "dataset"
/ "child_opportunity_index"
/ "usa.csv"
# Load HRS data
hrs_csv = (
constants.DATA_PATH / "dataset" / "historic_redlining" / "usa.csv"
)
self.child_opportunity_index_df = pd.read_csv(
child_opportunity_index_csv,
self.hrs_df = pd.read_csv(
hrs_csv,
dtype={self.GEOID_TRACT_FIELD_NAME: "string"},
low_memory=False,
)
national_tract_csv = constants.DATA_CENSUS_CSV_FILE_PATH
self.national_tract_df = pd.read_csv(
national_tract_csv,
names=[self.GEOID_TRACT_FIELD_NAME],
dtype={self.GEOID_TRACT_FIELD_NAME: "string"},
low_memory=False,
header=None,
)
def _join_tract_dfs(self, census_tract_dfs: list) -> pd.DataFrame:
logger.info("Joining Census Tract dataframes")
@ -253,6 +280,7 @@ class ScoreETL(ExtractTransformLoad):
df: pd.DataFrame,
input_column_name: str,
output_column_name_root: str,
drop_tracts: list = None,
ascending: bool = True,
) -> pd.DataFrame:
"""Creates percentiles.
@ -262,98 +290,46 @@ class ScoreETL(ExtractTransformLoad):
E.g., "PM2.5 exposure (percentile)".
This will be for the entire country.
For an "apples-to-apples" comparison of urban tracts to other urban tracts,
and compare rural tracts to other rural tracts.
This percentile will be created and returned as
f"{output_column_name_root}{field_names.PERCENTILE_URBAN_RURAL_FIELD_SUFFIX}".
E.g., "PM2.5 exposure (percentile urban/rural)".
This field exists for every tract, but for urban tracts this value will be the
percentile compared to other urban tracts, and for rural tracts this value
will be the percentile compared to other rural tracts.
Specific methodology:
1. Decide a methodology for confirming whether a tract counts as urban or
rural. Currently in the codebase, we use Geocorr to identify the % rural of
a tract, and mark the tract as rural if the percentage is >50% and urban
otherwise. This may or may not be the right methodology.
2. Once tracts are marked as urban or rural, create one percentile rank
that only ranks urban tracts, and one percentile rank that only ranks rural
tracts.
3. Combine into a single field.
`output_column_name_root` is different from `input_column_name` to enable the
reverse percentile use case. In that use case, `input_column_name` may be
something like "3rd grade reading proficiency" and `output_column_name_root`
may be something like "Low 3rd grade reading proficiency".
"""
if (
output_column_name_root
!= field_names.EXPECTED_AGRICULTURE_LOSS_RATE_FIELD
):
# We have two potential options for assessing how to calculate percentiles.
# For the vast majority of columns, we will simply calculate percentiles overall.
# However, for Linguistic Isolation and Agricultural Value Loss, there are conditions
# under which we drop tracts from consideration in the percentile. More details on those
# are below; for those columns, we provide a list of tracts to exclude.
# Because of the fancy transformations below, I have removed the urban / rural percentiles,
# which are now deprecated.
if not drop_tracts:
# Create the "basic" percentile.
## note: I believe this is less performant than if we made a bunch of these PFS columns
## and then concatenated the list. For the refactor!
df[
f"{output_column_name_root}"
f"{field_names.PERCENTILE_FIELD_SUFFIX}"
] = df[input_column_name].rank(pct=True, ascending=ascending)
else:
# For agricultural loss, we are using whether there is value at all to determine percentile and then
# filling places where the value is False with 0
tmp_series = df[input_column_name].where(
~df[field_names.GEOID_TRACT_FIELD].isin(drop_tracts),
np.nan,
)
logger.info(
f"Creating special case column for percentiles from {input_column_name}"
)
df[
f"{output_column_name_root}"
f"{field_names.PERCENTILE_FIELD_SUFFIX}"
] = (
df.where(
df[field_names.AGRICULTURAL_VALUE_BOOL_FIELD].astype(float)
== 1.0
)[input_column_name]
.rank(ascending=ascending, pct=True)
.fillna(
df[field_names.AGRICULTURAL_VALUE_BOOL_FIELD].astype(float)
)
)
] = tmp_series.rank(ascending=ascending, pct=True)
# Create the urban/rural percentiles.
urban_rural_percentile_fields_to_combine = []
for (urban_or_rural_string, urban_heuristic_bool) in [
("urban", True),
("rural", False),
]:
# Create a field with only those values
this_category_only_value_field = (
f"{input_column_name} (value {urban_or_rural_string} only)"
)
df[this_category_only_value_field] = np.where(
df[field_names.URBAN_HEURISTIC_FIELD] == urban_heuristic_bool,
df[input_column_name],
None,
)
# Calculate the percentile for only this category
this_category_only_percentile_field = (
f"{output_column_name_root} "
f"(percentile {urban_or_rural_string} only)"
)
df[this_category_only_percentile_field] = df[
this_category_only_value_field
].rank(
pct=True,
# Set ascending to the parameter value.
ascending=ascending,
)
# Add the field name to this list. Later, we'll combine this list.
urban_rural_percentile_fields_to_combine.append(
this_category_only_percentile_field
)
# Combine both urban and rural into one field:
df[
f"{output_column_name_root}{field_names.PERCENTILE_URBAN_RURAL_FIELD_SUFFIX}"
] = df[urban_rural_percentile_fields_to_combine].mean(
axis=1, skipna=True
)
# Check that "drop tracts" were dropped (quicker than creating a fixture?)
assert df[df[field_names.GEOID_TRACT_FIELD].isin(drop_tracts)][
f"{output_column_name_root}"
f"{field_names.PERCENTILE_FIELD_SUFFIX}"
].isna().sum() == len(drop_tracts), "Not all tracts were dropped"
return df
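# --- Illustrative sketch (not part of this module) ---
# A minimal, hypothetical example of how the percentile logic documented above behaves,
# covering the plain case, the `drop_tracts` case, and the reverse-percentile case.
# The tract IDs, values, and column names below are made up for illustration only.
import numpy as np
import pandas as pd

example_df = pd.DataFrame(
    {
        "GEOID10_TRACT": ["01001020100", "72001956300", "06037101110", "36061000100"],
        "Linguistic isolation (percent)": [0.05, 0.40, 0.12, 0.30],
        "3rd grade reading proficiency": [0.90, 0.55, 0.70, 0.35],
    }
)

# drop_tracts case: exclude Puerto Rico (state FIPS "72") by masking those rows to NaN
# before ranking; masked rows receive no percentile.
drop_tracts = example_df[
    example_df["GEOID10_TRACT"].str.startswith("72")
]["GEOID10_TRACT"].to_list()
masked = example_df["Linguistic isolation (percent)"].where(
    ~example_df["GEOID10_TRACT"].isin(drop_tracts), np.nan
)
example_df["Linguistic isolation (percent) (percentile)"] = masked.rank(
    pct=True, ascending=True
)

# Reverse-percentile case: rank with ascending=False so a high ("good") raw value maps
# to a low percentile under a "Low ..." output column name.
example_df["Low 3rd grade reading proficiency (percentile)"] = example_df[
    "3rd grade reading proficiency"
].rank(pct=True, ascending=False)

print(example_df)
# The Puerto Rico row ends up with NaN in the linguistic isolation percentile, mirroring
# the assertion at the end of _add_percentiles_to_df that all drop_tracts are NaN.
# --- End of illustrative sketch ---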
@ -363,19 +339,25 @@ class ScoreETL(ExtractTransformLoad):
# Join all the data sources that use census tracts
census_tract_dfs = [
self.census_df,
self.census_acs_df,
self.hud_housing_df,
self.cdc_places_df,
self.cdc_life_expectancy_df,
self.doe_energy_burden_df,
self.ejscreen_df,
self.geocorr_urban_rural_df,
self.persistent_poverty_df,
self.national_risk_index_df,
self.census_acs_median_incomes_df,
self.census_decennial_df,
self.census_2010_df,
self.child_opportunity_index_df,
self.hrs_df,
self.dot_travel_disadvantage_df,
self.fsf_flood_df,
self.fsf_fire_df,
self.nature_deprived_df,
self.eamlis_df,
self.fuds_df,
self.tribal_overlap_df,
]
# Sanity check each data frame before merging.
@ -384,8 +366,22 @@ class ScoreETL(ExtractTransformLoad):
census_tract_df = self._join_tract_dfs(census_tract_dfs)
# If GEOID10s are read as numbers instead of strings, the initial 0 is dropped,
# and then we get too many CBG rows (one for 012345 and one for 12345).
# Drop tracts that don't exist in the 2010 tracts
pre_join_len = census_tract_df[field_names.GEOID_TRACT_FIELD].nunique()
census_tract_df = census_tract_df.merge(
self.national_tract_df,
on="GEOID10_TRACT",
how="inner",
)
assert (
census_tract_df.shape[0] <= pre_join_len
), "Join against national tract list ADDED rows"
logger.info(
"Dropped %s tracts not in the 2010 tract data",
pre_join_len
- census_tract_df[field_names.GEOID_TRACT_FIELD].nunique(),
)
# Now sanity-check the merged df.
self._census_tract_df_sanity_check(
@ -405,8 +401,29 @@ class ScoreETL(ExtractTransformLoad):
df[field_names.MEDIAN_INCOME_FIELD] / df[field_names.AMI_FIELD]
)
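# These island-area demographic fields are read in with a backfill suffix and are later
# copied into the corresponding primary demographic fields by _backfill_island_demographics().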
self.ISLAND_DEMOGRAPHIC_BACKFILL_FIELDS = [
field_names.PERCENT_BLACK_FIELD_NAME
+ field_names.ISLAND_AREA_BACKFILL_SUFFIX,
field_names.PERCENT_AMERICAN_INDIAN_FIELD_NAME
+ field_names.ISLAND_AREA_BACKFILL_SUFFIX,
field_names.PERCENT_ASIAN_FIELD_NAME
+ field_names.ISLAND_AREA_BACKFILL_SUFFIX,
field_names.PERCENT_HAWAIIAN_FIELD_NAME
+ field_names.ISLAND_AREA_BACKFILL_SUFFIX,
field_names.PERCENT_TWO_OR_MORE_RACES_FIELD_NAME
+ field_names.ISLAND_AREA_BACKFILL_SUFFIX,
field_names.PERCENT_NON_HISPANIC_WHITE_FIELD_NAME
+ field_names.ISLAND_AREA_BACKFILL_SUFFIX,
field_names.PERCENT_HISPANIC_FIELD_NAME
+ field_names.ISLAND_AREA_BACKFILL_SUFFIX,
field_names.PERCENT_OTHER_RACE_FIELD_NAME
+ field_names.ISLAND_AREA_BACKFILL_SUFFIX,
]
# Donut columns get added later
numeric_columns = [
field_names.HOUSING_BURDEN_FIELD,
field_names.NO_KITCHEN_OR_INDOOR_PLUMBING_FIELD,
field_names.TOTAL_POP_FIELD,
field_names.MEDIAN_INCOME_AS_PERCENT_OF_STATE_FIELD,
field_names.ASTHMA_FIELD,
@ -453,27 +470,55 @@ class ScoreETL(ExtractTransformLoad):
field_names.CENSUS_UNEMPLOYMENT_FIELD_2010,
field_names.CENSUS_POVERTY_LESS_THAN_100_FPL_FIELD_2010,
field_names.CENSUS_DECENNIAL_TOTAL_POPULATION_FIELD_2009,
field_names.EXTREME_HEAT_FIELD,
field_names.HEALTHY_FOOD_FIELD,
field_names.IMPENETRABLE_SURFACES_FIELD,
# We have to pass this boolean here in order to include it in ag value loss percentiles.
field_names.AGRICULTURAL_VALUE_BOOL_FIELD,
]
field_names.UST_FIELD,
field_names.DOT_TRAVEL_BURDEN_FIELD,
field_names.FUTURE_FLOOD_RISK_FIELD,
field_names.FUTURE_WILDFIRE_RISK_FIELD,
field_names.TRACT_PERCENT_NON_NATURAL_FIELD_NAME,
field_names.POVERTY_LESS_THAN_200_FPL_IMPUTED_FIELD,
field_names.PERCENT_BLACK_FIELD_NAME,
field_names.PERCENT_AMERICAN_INDIAN_FIELD_NAME,
field_names.PERCENT_ASIAN_FIELD_NAME,
field_names.PERCENT_HAWAIIAN_FIELD_NAME,
field_names.PERCENT_TWO_OR_MORE_RACES_FIELD_NAME,
field_names.PERCENT_NON_HISPANIC_WHITE_FIELD_NAME,
field_names.PERCENT_HISPANIC_FIELD_NAME,
field_names.PERCENT_OTHER_RACE_FIELD_NAME,
field_names.PERCENT_AGE_UNDER_10,
field_names.PERCENT_AGE_10_TO_64,
field_names.PERCENT_AGE_OVER_64,
field_names.PERCENT_OF_TRIBAL_AREA_IN_TRACT,
field_names.COUNT_OF_TRIBAL_AREAS_IN_TRACT_AK,
field_names.COUNT_OF_TRIBAL_AREAS_IN_TRACT_CONUS,
field_names.PERCENT_OF_TRIBAL_AREA_IN_TRACT_DISPLAY,
] + self.ISLAND_DEMOGRAPHIC_BACKFILL_FIELDS
non_numeric_columns = [
self.GEOID_TRACT_FIELD_NAME,
field_names.PERSISTENT_POVERTY_FIELD,
field_names.TRACT_ELIGIBLE_FOR_NONNATURAL_THRESHOLD,
field_names.AGRICULTURAL_VALUE_BOOL_FIELD,
field_names.NAMES_OF_TRIBAL_AREAS_IN_TRACT,
]
boolean_columns = [
field_names.AML_BOOLEAN,
field_names.IMPUTED_INCOME_FLAG_FIELD_NAME,
field_names.ELIGIBLE_FUDS_BINARY_FIELD_NAME,
field_names.HISTORIC_REDLINING_SCORE_EXCEEDED,
field_names.IS_TRIBAL_DAC,
]
# For some columns, high values are "good", so we want to reverse the percentile
# so that high values are "bad" and any scoring logic can still check if it's
# >= some threshold.
# Note that we must use dataclass here instead of namedtuples on account of pylint
# TODO: Add more fields here.
# https://github.com/usds/justice40-tool/issues/970
ReversePercentile = namedtuple(
typename="ReversePercentile",
field_names=["field_name", "low_field_name"],
)
@dataclass
class ReversePercentile:
field_name: str
low_field_name: str
reverse_percentiles = [
# This dictionary follows the format:
# <field name> : <field name for low values>
@ -481,10 +526,6 @@ class ScoreETL(ExtractTransformLoad):
# This low field will not exist yet, it is only calculated for the
# percentile.
# TODO: This will come from the YAML dataset config
ReversePercentile(
field_name=field_names.READING_FIELD,
low_field_name=field_names.LOW_READING_FIELD,
),
ReversePercentile(
field_name=field_names.MEDIAN_INCOME_AS_PERCENT_OF_AMI_FIELD,
low_field_name=field_names.LOW_MEDIAN_INCOME_AS_PERCENT_OF_AMI_FIELD,
@ -503,40 +544,90 @@ class ScoreETL(ExtractTransformLoad):
non_numeric_columns
+ numeric_columns
+ [rp.field_name for rp in reverse_percentiles]
+ boolean_columns
)
df_copy = df[columns_to_keep].copy()
assert len(numeric_columns) == len(
set(numeric_columns)
), "You have a double-entered column in the numeric columns list"
df_copy[numeric_columns] = df_copy[numeric_columns].apply(pd.to_numeric)
# Coerce all boolean columns to bools while preserving missing values;
# because the column is boolean, missing values must be represented with `None`.
for col in boolean_columns:
tmp = df_copy[col].copy()
df_copy[col] = np.where(tmp.notna(), tmp.astype(bool), None)
logger.info(f"{col} contains {df_copy[col].isna().sum()} nulls.")
# Convert all columns to numeric and do math
# Note that we have a few special conditions here and we handle them explicitly.
# For *Linguistic Isolation*, we do NOT want to include Puerto Rico in the percentile
# calculation. This is because linguistic isolation as a category doesn't make much sense
# in Puerto Rico, where Spanish is a recognized language. Thus, we construct a list
# of tracts to drop from the percentile calculation.
#
# For *Expected Agricultural Loss*, we only want to include in the percentile tracts
# in which there is some agricultural value. This helps us adjust the data such that we have
# the ability to discern which tracts truly are at the 90th percentile, since many tracts have 0 value.
#
# For *Non-Natural Space*, we may only want to include tracts that have at least 35 acres, I think. This will
# get rid of tracts that we think are statistical aberrations. Right now, we have left this out
# pending ground-truthing.
#
# For *Traffic Barriers*, we want to exclude low-population tracts, which may show a high burden simply
# because they have few people. We set this low-population constant in the if statement below.
for numeric_column in numeric_columns:
drop_tracts = []
if (
numeric_column
== field_names.EXPECTED_AGRICULTURE_LOSS_RATE_FIELD
):
drop_tracts = df_copy[
~df_copy[field_names.AGRICULTURAL_VALUE_BOOL_FIELD]
.astype(bool)
.fillna(False)
][field_names.GEOID_TRACT_FIELD].to_list()
logger.info(
f"Dropping {len(drop_tracts)} tracts from Agricultural Value Loss"
)
elif numeric_column == field_names.LINGUISTIC_ISO_FIELD:
drop_tracts = df_copy[
# 72 is the FIPS code for Puerto Rico
df_copy[field_names.GEOID_TRACT_FIELD].str.startswith("72")
][field_names.GEOID_TRACT_FIELD].to_list()
logger.info(
f"Dropping {len(drop_tracts)} tracts from Linguistic Isolation"
)
elif numeric_column in [
field_names.DOT_TRAVEL_BURDEN_FIELD,
field_names.EXPECTED_POPULATION_LOSS_RATE_FIELD,
]:
# Not having any people appears to be correlated with transit burden, but it also doesn't represent
# on-the-ground need. For now, we remove these tracts from the percentile calculation.
# Similarly, we want to exclude low population tracts from FEMA's index
low_population = 20
drop_tracts = df_copy[
df_copy[field_names.TOTAL_POP_FIELD].fillna(0)
<= low_population
][field_names.GEOID_TRACT_FIELD].to_list()
logger.info(
f"Dropping {len(drop_tracts)} tracts from DOT traffic burden"
)
df_copy = self._add_percentiles_to_df(
df=df_copy,
input_column_name=numeric_column,
# For this use case, the input name and output name root are the same.
output_column_name_root=numeric_column,
ascending=True,
drop_tracts=drop_tracts,
)
# Min-max normalization:
# (
# Observed value
# - minimum of all values
# )
# divided by
# (
# Maximum of all values
# - minimum of all values
# )
min_value = df_copy[numeric_column].min(skipna=True)
max_value = df_copy[numeric_column].max(skipna=True)
df_copy[f"{numeric_column}{field_names.MIN_MAX_FIELD_SUFFIX}"] = (
df_copy[numeric_column] - min_value
) / (max_value - min_value)
# Create reversed percentiles for these fields
for reverse_percentile in reverse_percentiles:
# Calculate reverse percentiles
@ -566,6 +657,32 @@ class ScoreETL(ExtractTransformLoad):
return df_copy
@staticmethod
def _get_island_areas(df: pd.DataFrame) -> pd.Series:
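# Island-area tracts are identified by the state FIPS prefix (the first two characters of the tract GEOID).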
return (
df[field_names.GEOID_TRACT_FIELD]
.str[:2]
.isin(constants.TILES_ISLAND_AREA_FIPS_CODES)
)
def _backfill_island_demographics(self, df: pd.DataFrame) -> pd.DataFrame:
logger.info("Backfilling island demographic data")
island_index = self._get_island_areas(df)
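# For each island-area tract, copy the suffixed backfill column into its primary demographic
# column; the backfill columns are dropped afterwards and total population is taken from the
# 2010 decennial figure.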
for backfill_field_name in self.ISLAND_DEMOGRAPHIC_BACKFILL_FIELDS:
actual_field_name = backfill_field_name.replace(
field_names.ISLAND_AREA_BACKFILL_SUFFIX, ""
)
df.loc[island_index, actual_field_name] = df.loc[
island_index, backfill_field_name
]
df = df.drop(columns=self.ISLAND_DEMOGRAPHIC_BACKFILL_FIELDS)
df.loc[island_index, field_names.TOTAL_POP_FIELD] = df.loc[
island_index, field_names.COMBINED_CENSUS_TOTAL_POPULATION_2010
]
return df
def transform(self) -> None:
logger.info("Transforming Score Data")
@ -575,8 +692,13 @@ class ScoreETL(ExtractTransformLoad):
# calculate scores
self.df = ScoreRunner(df=self.df).calculate_scores()
# We backfill island demographic data after scoring since it does not affect the score
self.df = self._backfill_island_demographics(self.df)
def load(self) -> None:
logger.info("Saving Score CSV")
logger.info(
f"Saving Score CSV to {constants.DATA_SCORE_CSV_FULL_FILE_PATH}."
)
constants.DATA_SCORE_CSV_FULL_DIR.mkdir(parents=True, exist_ok=True)
self.df.to_csv(constants.DATA_SCORE_CSV_FULL_FILE_PATH, index=False)


@ -1,24 +1,20 @@
import concurrent.futures
import math
import os
import geopandas as gpd
import numpy as np
import pandas as pd
import geopandas as gpd
from data_pipeline.content.schemas.download_schemas import CSVConfig
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.score import constants
from data_pipeline.etl.sources.census.etl_utils import (
check_census_data_source,
)
from data_pipeline.etl.score.etl_utils import check_score_data_source
from data_pipeline.etl.sources.census.etl_utils import check_census_data_source
from data_pipeline.score import field_names
from data_pipeline.content.schemas.download_schemas import CSVConfig
from data_pipeline.utils import (
get_module_logger,
zip_files,
load_yaml_dict_from_file,
load_dict_from_yaml_object_fields,
)
from data_pipeline.utils import get_module_logger
from data_pipeline.utils import load_dict_from_yaml_object_fields
from data_pipeline.utils import load_yaml_dict_from_file
from data_pipeline.utils import zip_files
logger = get_module_logger(__name__)
@ -41,23 +37,25 @@ class GeoScoreETL(ExtractTransformLoad):
self.SCORE_CSV_PATH = self.DATA_PATH / "score" / "csv"
self.TILE_SCORE_CSV = self.SCORE_CSV_PATH / "tiles" / "usa.csv"
self.DATA_SOURCE = data_source
self.CENSUS_USA_GEOJSON = (
self.DATA_PATH / "census" / "geojson" / "us.json"
)
# Import the shortened name for Score M percentile ("SM_PFS") that's used on the
# tiles.
# Import the shortened name for Score N to be used on tiles.
# We should no longer be using PFS
## TODO: We really should not have to keep changing this
self.TARGET_SCORE_SHORT_FIELD = constants.TILES_SCORE_COLUMNS[
field_names.SCORE_M + field_names.PERCENTILE_FIELD_SUFFIX
field_names.FINAL_SCORE_N_BOOLEAN
]
self.TARGET_SCORE_RENAME_TO = "M_SCORE"
self.TARGET_SCORE_RENAME_TO = "SCORE"
# Import the shortened name for tract ("GTF") that's used on the tiles.
self.TRACT_SHORT_FIELD = constants.TILES_SCORE_COLUMNS[
field_names.GEOID_TRACT_FIELD
]
self.GEOMETRY_FIELD_NAME = "geometry"
self.LAND_FIELD_NAME = "ALAND10"
# We will adjust this upwards while there is some fractional value
# in the score. This is a starting value.
@ -84,17 +82,28 @@ class GeoScoreETL(ExtractTransformLoad):
)
logger.info("Reading US GeoJSON (~6 minutes)")
self.geojson_usa_df = gpd.read_file(
full_geojson_usa_df = gpd.read_file(
self.CENSUS_USA_GEOJSON,
dtype={self.GEOID_FIELD_NAME: "string"},
usecols=[self.GEOID_FIELD_NAME, self.GEOMETRY_FIELD_NAME],
usecols=[
self.GEOID_FIELD_NAME,
self.GEOMETRY_FIELD_NAME,
self.LAND_FIELD_NAME,
],
low_memory=False,
)
# We only want to visualize tracts that have non-zero land area
self.geojson_usa_df = full_geojson_usa_df[
full_geojson_usa_df[self.LAND_FIELD_NAME] > 0
]
logger.info("Reading score CSV")
self.score_usa_df = pd.read_csv(
self.TILE_SCORE_CSV,
dtype={self.TRACT_SHORT_FIELD: "string"},
dtype={
self.TRACT_SHORT_FIELD: str,
},
low_memory=False,
)
@ -134,7 +143,7 @@ class GeoScoreETL(ExtractTransformLoad):
columns={self.TARGET_SCORE_SHORT_FIELD: self.TARGET_SCORE_RENAME_TO}
)
logger.info("Converting to geojson into tracts")
logger.info("Converting geojson into geodf with tracts")
usa_tracts = gpd.GeoDataFrame(
usa_tracts,
columns=[
@ -270,8 +279,10 @@ class GeoScoreETL(ExtractTransformLoad):
# Create separate threads to run each write to disk.
def write_high_to_file():
logger.info("Writing usa-high (~9 minutes)")
self.geojson_score_usa_high.to_file(
filename=self.SCORE_HIGH_GEOJSON, driver="GeoJSON"
filename=self.SCORE_HIGH_GEOJSON,
driver="GeoJSON",
)
logger.info("Completed writing usa-high")
@ -294,7 +305,6 @@ class GeoScoreETL(ExtractTransformLoad):
pd.Series(codebook)
.reset_index()
.rename(
# kept as strings because no downstream impacts
columns={
0: internal_column_name_field,
"index": shapefile_column_field,


@ -1,35 +1,29 @@
from pathlib import Path
import json
from numpy import float64
from pathlib import Path
import numpy as np
import pandas as pd
from data_pipeline.content.schemas.download_schemas import (
CSVConfig,
CodebookConfig,
ExcelConfig,
)
from data_pipeline.content.schemas.download_schemas import CodebookConfig
from data_pipeline.content.schemas.download_schemas import CSVConfig
from data_pipeline.content.schemas.download_schemas import ExcelConfig
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.score.etl_utils import floor_series, create_codebook
from data_pipeline.utils import (
get_module_logger,
zip_files,
load_yaml_dict_from_file,
column_list_from_yaml_object_fields,
load_dict_from_yaml_object_fields,
)
from data_pipeline.etl.score.etl_utils import create_codebook
from data_pipeline.etl.score.etl_utils import floor_series
from data_pipeline.etl.sources.census.etl_utils import check_census_data_source
from data_pipeline.score import field_names
from data_pipeline.utils import column_list_from_yaml_object_fields
from data_pipeline.utils import get_module_logger
from data_pipeline.utils import load_dict_from_yaml_object_fields
from data_pipeline.utils import load_yaml_dict_from_file
from data_pipeline.utils import zip_files
from numpy import float64
from data_pipeline.etl.sources.census.etl_utils import (
check_census_data_source,
)
from . import constants
logger = get_module_logger(__name__)
# Define the DAC variable
DISADVANTAGED_COMMUNITIES_FIELD = field_names.SCORE_M_COMMUNITIES
DISADVANTAGED_COMMUNITIES_FIELD = field_names.SCORE_N_COMMUNITIES
class PostScoreETL(ExtractTransformLoad):
@ -45,7 +39,6 @@ class PostScoreETL(ExtractTransformLoad):
self.input_counties_df: pd.DataFrame
self.input_states_df: pd.DataFrame
self.input_score_df: pd.DataFrame
self.input_national_tract_df: pd.DataFrame
self.output_score_county_state_merged_df: pd.DataFrame
self.output_score_tiles_df: pd.DataFrame
@ -92,7 +85,9 @@ class PostScoreETL(ExtractTransformLoad):
def _extract_score(self, score_path: Path) -> pd.DataFrame:
logger.info("Reading Score CSV")
df = pd.read_csv(
score_path, dtype={self.GEOID_TRACT_FIELD_NAME: "string"}
score_path,
dtype={self.GEOID_TRACT_FIELD_NAME: "string"},
low_memory=False,
)
# Convert total population to an int
@ -102,18 +97,6 @@ class PostScoreETL(ExtractTransformLoad):
return df
def _extract_national_tract(
self, national_tract_path: Path
) -> pd.DataFrame:
logger.info("Reading national tract file")
return pd.read_csv(
national_tract_path,
names=[self.GEOID_TRACT_FIELD_NAME],
dtype={self.GEOID_TRACT_FIELD_NAME: "string"},
low_memory=False,
header=None,
)
def extract(self) -> None:
logger.info("Starting Extraction")
@ -136,9 +119,6 @@ class PostScoreETL(ExtractTransformLoad):
self.input_score_df = self._extract_score(
constants.DATA_SCORE_CSV_FULL_FILE_PATH
)
self.input_national_tract_df = self._extract_national_tract(
constants.DATA_CENSUS_CSV_FILE_PATH
)
def _transform_counties(
self, initial_counties_df: pd.DataFrame
@ -185,7 +165,6 @@ class PostScoreETL(ExtractTransformLoad):
def _create_score_data(
self,
national_tract_df: pd.DataFrame,
counties_df: pd.DataFrame,
states_df: pd.DataFrame,
score_df: pd.DataFrame,
@ -217,28 +196,11 @@ class PostScoreETL(ExtractTransformLoad):
right_on=self.STATE_CODE_COLUMN,
how="left",
)
# check if there are census tracts without score
logger.info("Removing tract rows without score")
# merge census tracts with score
merged_df = national_tract_df.merge(
score_county_state_merged,
on=self.GEOID_TRACT_FIELD_NAME,
how="left",
)
# recast population to integer
score_county_state_merged["Total population"] = (
merged_df["Total population"].fillna(0).astype(int)
)
de_duplicated_df = merged_df.dropna(
subset=[DISADVANTAGED_COMMUNITIES_FIELD]
)
assert score_county_merged[
self.GEOID_TRACT_FIELD_NAME
].is_unique, "Merging state/county data introduced duplicate rows"
# set the score to the new df
return de_duplicated_df
return score_county_state_merged
def _create_tile_data(
self,
@ -254,8 +216,8 @@ class PostScoreETL(ExtractTransformLoad):
tiles_score_column_titles
].copy()
# Currently, we do not want USVI or Guam on the map, so this will drop all
# rows with the FIPS codes (first two digits of the census tract)
# We may not want some states/territories on the map, so this will drop all
# rows with those FIPS codes (first two digits of the census tract)
logger.info(
f"Dropping specified FIPS codes from tile data: {constants.DROP_FIPS_CODES}"
)
@ -269,16 +231,15 @@ class PostScoreETL(ExtractTransformLoad):
score_tiles = score_tiles[
~score_tiles[field_names.GEOID_TRACT_FIELD].isin(tracts_to_drop)
]
score_tiles[constants.TILES_SCORE_FLOAT_COLUMNS] = score_tiles[
constants.TILES_SCORE_FLOAT_COLUMNS
].apply(
func=lambda series: floor_series(
series=series,
number_of_decimals=constants.TILES_ROUND_NUM_DECIMALS,
),
axis=0,
)
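# Identify all float64 columns and floor them to TILES_ROUND_NUM_DECIMALS decimal places
# using a scale factor (multiply, floor, then divide).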
float_cols = [
col
for col, col_dtype in score_tiles.dtypes.items()
if col_dtype == np.dtype("float64")
]
scale_factor = 10**constants.TILES_ROUND_NUM_DECIMALS
score_tiles[float_cols] = (
score_tiles[float_cols] * scale_factor
).apply(np.floor) / scale_factor
logger.info("Adding fields for island areas and Puerto Rico")
# The below operation constructs variables for the front end.
@ -427,7 +388,6 @@ class PostScoreETL(ExtractTransformLoad):
transformed_score = self._transform_score(self.input_score_df)
output_score_county_state_merged_df = self._create_score_data(
self.input_national_tract_df,
transformed_counties,
transformed_states,
transformed_score,
@ -521,8 +481,6 @@ class PostScoreETL(ExtractTransformLoad):
score_tiles_df.to_csv(tile_score_path, index=False, encoding="utf-8")
def _load_downloadable_zip(self, downloadable_info_path: Path) -> None:
logger.info("Saving Downloadable CSV")
downloadable_info_path.mkdir(parents=True, exist_ok=True)
csv_path = constants.SCORE_DOWNLOADABLE_CSV_FILE_PATH
excel_path = constants.SCORE_DOWNLOADABLE_EXCEL_FILE_PATH
@ -583,6 +541,22 @@ class PostScoreETL(ExtractTransformLoad):
"fields"
],
)
assert codebook_df["csv_label"].equals(codebook_df["excel_label"]), (
"CSV and Excel differ. If that's intentional, "
"remove this assertion. Otherwise, fix it."
)
# Check the codebook to make sure it matches the download files
assert not set(codebook_df["csv_label"].dropna()).difference(
downloadable_df.columns
), "Codebook is missing columns from downloadable files"
assert (
len(
downloadable_df.columns.difference(
set(codebook_df["csv_label"])
)
)
== 0
), "Codebook has columns the downloadable files do not"
# load codebook to disk
codebook_df.to_csv(codebook_path, index=False)


@ -1,16 +1,21 @@
import os
import sys
from pathlib import Path
import typing
from collections import namedtuple
from pathlib import Path
import numpy as np
import pandas as pd
from data_pipeline.config import settings
from data_pipeline.utils import (
download_file_from_url,
get_module_logger,
)
from data_pipeline.etl.score.constants import TILES_ALASKA_AND_HAWAII_FIPS_CODE
from data_pipeline.etl.score.constants import TILES_CONTINENTAL_US_FIPS_CODE
from data_pipeline.etl.score.constants import TILES_ISLAND_AREA_FIPS_CODES
from data_pipeline.etl.score.constants import TILES_PUERTO_RICO_FIPS_CODE
from data_pipeline.etl.sources.census.etl_utils import get_state_fips_codes
from data_pipeline.score import field_names
from data_pipeline.utils import download_file_from_url
from data_pipeline.utils import get_module_logger
from . import constants
logger = get_module_logger(__name__)
@ -91,7 +96,7 @@ def floor_series(series: pd.Series, number_of_decimals: int) -> pd.Series:
if series.isin(unacceptable_values).any():
series.replace(mapping, regex=False, inplace=True)
multiplication_factor = 10 ** number_of_decimals
multiplication_factor = 10**number_of_decimals
# In order to safely cast NaNs
# First coerce series to float type: series.astype(float)
@ -305,3 +310,106 @@ def create_codebook(
return merged_codebook_df[constants.CODEBOOK_COLUMNS].rename(
columns={constants.CEJST_SCORE_COLUMN_NAME: "Description"}
)
# pylint: disable=too-many-arguments
def compare_to_list_of_expected_state_fips_codes(
actual_state_fips_codes: typing.List[str],
continental_us_expected: bool = True,
alaska_and_hawaii_expected: bool = True,
puerto_rico_expected: bool = True,
island_areas_expected: bool = True,
additional_fips_codes_not_expected: typing.List[str] = None,
dataset_name: str = None,
) -> None:
"""Check whether a list of state/territory FIPS codes match expectations.
Args:
actual_state_fips_codes (List of str): Actual state codes observed in data
continental_us_expected (bool, optional): Do you expect the continental US
(DC & all states except Alaska and Hawaii) to be represented in the data?
alaska_and_hawaii_expected (bool, optional): Do you expect Alaska and Hawaii
to be represented in the data? Note: if only *one* of Alaska and Hawaii is
not expected to be included, do not use this argument -- instead,
use `additional_fips_codes_not_expected` for the one state you expect to
be missing.
puerto_rico_expected (bool, optional): Do you expect PR to be represented in data?
island_areas_expected (bool, optional): Do you expect Island Areas to be represented in
data?
additional_fips_codes_not_expected (List of str, optional): Additional state codes
not expected in the data. For example, the data may be known to be missing
data from Maine and Wisconsin.
dataset_name (str, optional): The name of the data set, used only in printing an
error message. (This is helpful for debugging during parallel etl runs.)
Returns:
None: Does not return any values.
Raises:
ValueError: if lists do not match expectations.
"""
# Setting default argument of [] here to avoid mutability problems.
if additional_fips_codes_not_expected is None:
additional_fips_codes_not_expected = []
# Cast input to a set.
actual_state_fips_codes_set = set(actual_state_fips_codes)
# Start with the list of all FIPS codes for all states and territories.
expected_states_set = set(get_state_fips_codes(settings.DATA_PATH))
# If continental US is not expected to be included, remove it from the
# expected states set.
if not continental_us_expected:
expected_states_set = expected_states_set.difference(
TILES_CONTINENTAL_US_FIPS_CODE
)
# If both Alaska and Hawaii are not expected to be included, remove them from the
# expected states set.
# Note: if only *one* of Alaska and Hawaii is not expected to be included,
# do not use this argument -- instead, use `additional_fips_codes_not_expected`
# for the one state you expect to be missing.
if not alaska_and_hawaii_expected:
expected_states_set = expected_states_set.difference(
TILES_ALASKA_AND_HAWAII_FIPS_CODE
)
# If Puerto Rico is not expected to be included, remove it from the expected
# states set.
if not puerto_rico_expected:
expected_states_set = expected_states_set.difference(
TILES_PUERTO_RICO_FIPS_CODE
)
# If island areas are not expected to be included, remove them from the expected
# states set.
if not island_areas_expected:
expected_states_set = expected_states_set.difference(
TILES_ISLAND_AREA_FIPS_CODES
)
# If additional FIPS codes are not expected to be included, remove them from the
# expected states set.
expected_states_set = expected_states_set.difference(
additional_fips_codes_not_expected
)
dataset_name_phrase = (
f" for dataset `{dataset_name}`" if dataset_name is not None else ""
)
if expected_states_set != actual_state_fips_codes_set:
raise ValueError(
f"The states and territories in the data{dataset_name_phrase} are not "
f"as expected.\n"
"FIPS state codes expected that are not present in the data:\n"
f"{sorted(list(expected_states_set - actual_state_fips_codes_set))}\n"
"FIPS state codes in the data that were not expected:\n"
f"{sorted(list(actual_state_fips_codes_set - expected_states_set))}\n"
)
else:
logger.info(
"Data matches expected state and territory representation"
f"{dataset_name_phrase}."
)


@ -1,6 +1,8 @@
from dataclasses import dataclass, field
from dataclasses import dataclass
from dataclasses import field
from enum import Enum
from typing import List, Optional
from typing import List
from typing import Optional
class FieldType(Enum):
@ -77,7 +79,7 @@ class DatasetsConfig:
long_name: str
short_name: str
module_name: str
input_geoid_tract_field_name: str
load_fields: List[LoadField]
input_geoid_tract_field_name: Optional[str] = None
datasets: List[Dataset]


@ -5,7 +5,8 @@ from pathlib import Path
import pandas as pd
import pytest
from data_pipeline import config
from data_pipeline.etl.score import etl_score_post, tests
from data_pipeline.etl.score import etl_score_post
from data_pipeline.etl.score import tests
from data_pipeline.etl.score.etl_score_post import PostScoreETL

File diff suppressed because one or more lines are too long


@ -1,4 +1,4 @@
fips,state_name,state_abbreviation,region,division
01,Alabama,AL,South,East South Central
02,Alaska,AK,West,Pacific
04,Arizona,AZ,West,Mountain
04,Arizona,AZ,West,Mountain



@ -1,7 +1,9 @@
import pandas as pd
import numpy as np
import pandas as pd
import pytest
from data_pipeline.etl.score.etl_utils import (
compare_to_list_of_expected_state_fips_codes,
)
from data_pipeline.etl.score.etl_utils import floor_series
@ -70,3 +72,181 @@ def test_floor_series():
match="Argument series must be of type pandas series, not of type list.",
):
floor_series(invalid_type, number_of_decimals=3)
def test_compare_to_list_of_expected_state_fips_codes():
# Has every state/territory/DC code
fips_codes_test_1 = [
"01",
"02",
"04",
"05",
"06",
"08",
"09",
"10",
"11",
"12",
"13",
"15",
"16",
"17",
"18",
"19",
"20",
"21",
"22",
"23",
"24",
"25",
"26",
"27",
"28",
"29",
"30",
"31",
"32",
"33",
"34",
"35",
"36",
"37",
"38",
"39",
"40",
"41",
"42",
"44",
"45",
"46",
"47",
"48",
"49",
"50",
"51",
"53",
"54",
"55",
"56",
"60",
"66",
"69",
"72",
"78",
]
# Should not raise any errors
compare_to_list_of_expected_state_fips_codes(
actual_state_fips_codes=fips_codes_test_1
)
# Should raise error because Puerto Rico is not expected
with pytest.raises(ValueError) as exception_info:
compare_to_list_of_expected_state_fips_codes(
actual_state_fips_codes=fips_codes_test_1,
puerto_rico_expected=False,
)
partial_expected_error_message = (
"FIPS state codes in the data that were not expected:\n['72']\n"
)
assert partial_expected_error_message in str(exception_info.value)
# Should raise error because Island Areas are not expected
with pytest.raises(ValueError) as exception_info:
compare_to_list_of_expected_state_fips_codes(
actual_state_fips_codes=fips_codes_test_1,
island_areas_expected=False,
)
partial_expected_error_message = (
"FIPS state codes in the data that were not expected:\n"
"['60', '66', '69', '78']\n"
)
assert partial_expected_error_message in str(exception_info.value)
# List missing PR and Guam
fips_codes_test_2 = [x for x in fips_codes_test_1 if x not in ["66", "72"]]
# Should raise error because all Island Areas and PR are expected
with pytest.raises(ValueError) as exception_info:
compare_to_list_of_expected_state_fips_codes(
actual_state_fips_codes=fips_codes_test_2,
)
partial_expected_error_message = (
"FIPS state codes expected that are not present in the data:\n"
"['66', '72']\n"
)
assert partial_expected_error_message in str(exception_info.value)
# Missing Maine and Wisconsin
fips_codes_test_3 = [x for x in fips_codes_test_1 if x not in ["23", "55"]]
# Should raise error because Maine and Wisconsin are expected
with pytest.raises(ValueError) as exception_info:
compare_to_list_of_expected_state_fips_codes(
actual_state_fips_codes=fips_codes_test_3,
)
partial_expected_error_message = (
"FIPS state codes expected that are not present in the data:\n"
"['23', '55']\n"
)
assert partial_expected_error_message in str(exception_info.value)
# Should not raise error because Maine and Wisconsin are expected to be missing
compare_to_list_of_expected_state_fips_codes(
actual_state_fips_codes=fips_codes_test_3,
additional_fips_codes_not_expected=["23", "55"],
)
# Missing the continental US and AK/HI
fips_codes_test_4 = [
"60",
"66",
"69",
"72",
"78",
]
# Should raise error because the nation is expected
with pytest.raises(ValueError) as exception_info:
compare_to_list_of_expected_state_fips_codes(
actual_state_fips_codes=fips_codes_test_4,
)
partial_expected_error_message = (
"FIPS state codes expected that are not present in the data:\n"
"['01', '02', '04', '05', '06', '08', '09', '10', '11', '12', '13', '15', '16', "
"'17', '18', '19', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', "
"'30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '42', "
"'44', '45', '46', '47', '48', '49', '50', '51', '53', '54', '55', '56']"
)
assert partial_expected_error_message in str(exception_info.value)
# Should not raise error because continental US and AK/HI are expected to be missing
compare_to_list_of_expected_state_fips_codes(
actual_state_fips_codes=fips_codes_test_4,
continental_us_expected=False,
alaska_and_hawaii_expected=False,
)
# Missing Hawaii but not Alaska
fips_codes_test_5 = [x for x in fips_codes_test_1 if x not in ["15"]]
# Should raise error because both Hawaii and Alaska are expected
with pytest.raises(ValueError) as exception_info:
compare_to_list_of_expected_state_fips_codes(
actual_state_fips_codes=fips_codes_test_5,
alaska_and_hawaii_expected=True,
)
partial_expected_error_message = (
"FIPS state codes expected that are not present in the data:\n"
"['15']\n"
)
assert partial_expected_error_message in str(exception_info.value)
# Should work as expected
compare_to_list_of_expected_state_fips_codes(
actual_state_fips_codes=fips_codes_test_5,
alaska_and_hawaii_expected=True,
additional_fips_codes_not_expected=["15"],
)


@ -1,14 +1,11 @@
# pylint: disable=W0212
## Above disables warning about access to underscore-prefixed methods
from importlib import reload
from pathlib import Path
import pandas.api.types as ptypes
import pandas.testing as pdt
from data_pipeline.content.schemas.download_schemas import (
CSVConfig,
)
from data_pipeline.content.schemas.download_schemas import CSVConfig
from data_pipeline.etl.score import constants
from data_pipeline.utils import load_yaml_dict_from_file
@ -67,14 +64,12 @@ def test_transform_score(etl, score_data_initial, score_transformed_expected):
# pylint: disable=too-many-arguments
def test_create_score_data(
etl,
national_tract_df,
counties_transformed_expected,
states_transformed_expected,
score_transformed_expected,
score_data_expected,
):
score_data_actual = etl._create_score_data(
national_tract_df,
counties_transformed_expected,
states_transformed_expected,
score_transformed_expected,