* Backfill population in island areas (#1882)
* Update smoketest to account for backfills (#1882)
As I wrote in the comment:
We backfill island areas with data from the 2010 census, so if THOSE tracts
have data beyond the data source, that's to be expected and is fine to pass.
If some other state or territory does, though, this should fail.
This ends up being a nice way of documenting that behavior, I guess!
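A minimal sketch of that kind of check (the column name and helper are hypothetical; the island-area FIPS prefixes are the real ones for American Samoa, Guam, the Northern Mariana Islands, and the U.S. Virgin Islands):

```python
import pandas as pd

# Island areas are backfilled from the 2010 census, so tracts with these
# state FIPS prefixes may legitimately have data beyond the primary source.
ISLAND_AREA_FIPS = {"60", "66", "69", "78"}

def tract_allowed_extra_data(df: pd.DataFrame) -> pd.Series:
    """Hypothetical helper: flag rows whose tract sits in an island area."""
    state_fips = df["GEOID10_TRACT"].str[:2]
    return state_fips.isin(ISLAND_AREA_FIPS)

df = pd.DataFrame({"GEOID10_TRACT": ["60010950100", "06037101110"]})
mask = tract_allowed_extra_data(df)
```

A smoketest can then fail only when a tract outside that set has unexpected data.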
* Fixup lint issues (#1882)
* Add in race demos to 2010 census pull (#1851)
* Add backfill data to score (#1851)
* Change column name (#1851)
* Fill demos after the score (#1851)
* Add income back, adjust test (#1882)
* Apply code-review feedback (#1851)
* Add test for island area backfill (#1851)
* Fix bad rename (#1851)
* should be working, has unnecessary loggers
* removing loggers and cleaning up
* updating ejscreen tests
* adding tests and responding to PR feedback
* fixing broken smoke test
* delete smoketest docs
* Add tribal data to downloads (#1904)
* Update test pickle with current cols (#1904)
* Remove text of tribe names from GeoJSON (#1904)
* Update test data (#1904)
* Add tribal overlap to smoketests (#1904)
* Better document based on Lucas's feedback (#1835)
* Fix typo (#1835)
* Add test to verify GEOJSON matches tiles (#1835)
* Remove NOOP line (#1835)
* Move GEOJSON generation up for new smoketest (#1835)
* Fixup code format (#1835)
* Update readme for new smoketest (#1835)
* working notebook
* updating notebook
* wip
* fixing broken tests
* adding tribal overlap files
* WIP
* WIP
* WIP, calculated count and names
* working
* partial cleanup
* partial cleanup
* updating field names
* fixing bug
* removing pyogrio
* removing unused imports
* updating test fixtures to be more realistic
* cleaning up notebook
* fixing black
* fixing flake8 errors
* adding tox instructions
* updating etl_score
* suppressing warning
* Use projected CRSes, ignore geom types (#1900)
I looked into this a bit, and in general the geometry type mismatch
changes very little about the calculation; we have a mix of
multipolygons and polygons. The fastest thing to do is just not keep
the geom type; I did some runs with it set to both True and False, and
they're the same within 9 digits of precision. Logically we just want
the overlaps, regardless of how the actual geometries are encoded between
the frames, so in this case we can ignore the geom types and feel okay.
I also moved to projected CRSes, since we are actually trying to do area
calculations, so we should. Again, the change is small in
magnitude but logically more sound.
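The "ignore geom types" point can be illustrated without geopandas: the area of a multipolygon is just the sum of its parts, so whether an overlap comes back as one multipolygon or several polygons, the total area is the same. A pure-Python sketch using the shoelace formula (illustrative only; the pipeline itself uses geopandas overlays):

```python
def shoelace_area(ring):
    """Absolute area of a single polygon ring via the shoelace formula."""
    area = 0.0
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# A "multipolygon": two disjoint unit squares.
multipolygon = [
    [(0, 0), (1, 0), (1, 1), (0, 1)],
    [(2, 0), (3, 0), (3, 1), (2, 1)],
]

# Treating it as separate polygons or as one multipolygon gives the
# same total area, which is why geom type can be ignored here.
total_area = sum(shoelace_area(ring) for ring in multipolygon)
```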
* Readd CDC dataset config (#1900)
* adding comments to fips code
* delete unnecessary loggers
Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov>
* Refactor CDC life-expectancy (#1554)
* Update to new tract list (#1554)
* Adjust for tests (#1848)
* Add tests for cdc_places (#1848)
* Add EJScreen tests (#1848)
* Add tests for HUD housing (#1848)
* Add tests for GeoCorr (#1848)
* Add persistent poverty tests (#1848)
* Update for sources without zips, for new validation (#1848)
* Update tests for new multi-CSV bug (#1848)
Lucas updated the CDC life expectancy data to handle a bug where two
states are missing from the US Overall download. Since virtually none of
our other ETL classes download multiple CSVs directly like this, it
required a pretty invasive new mocking strategy.
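The shape of that mocking strategy, in miniature (the download function, URLs, and tract IDs here are all made up): `unittest.mock`'s `side_effect` can return a different fake CSV body for each successive download call, which is what a multi-CSV source needs.

```python
from unittest import mock

# side_effect yields a different fake CSV body per call, in order,
# mimicking a source split across multiple CSV downloads.
fake_bodies = [
    "tract,life_expectancy\n01001020100,75.1\n",
    "tract,life_expectancy\n02001000100,74.3\n",
]
fake_download = mock.Mock(side_effect=fake_bodies)

first = fake_download("https://example.com/usa.csv")
second = fake_download("https://example.com/states.csv")
```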
* Add basic tests for nature deprived (#1848)
* Add wildfire tests (#1848)
* Add flood risk tests (#1848)
* Add DOT travel tests (#1848)
* Add historic redlining tests (#1848)
* Add tests for ME and WI (#1848)
* Update now that validation exists (#1848)
* Adjust for validation (#1848)
* Add health insurance back to cdc places (#1848)
Oops
* Update tests with new field (#1848)
* Test for blank tract removal (#1848)
* Add tracts for clipping behavior
* Test clipping and zfill behavior (#1848)
* Fix bad test assumption (#1848)
* Simplify class, add test for tract padding (#1848)
* Fix percentage inversion, update tests (#1848)
Looking through the transformations, I noticed that we were subtracting
a percentage that is usually between 0-100 from 1 instead of 100, and so
were ending up with some surprising results. Confirmed with lucasmbrown-usds
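The fix in miniature (hypothetical column values): for a share expressed on a 0-100 scale, the complement is `100 - x`, not `1 - x`.

```python
import pandas as pd

percent_with_insurance = pd.Series([85.0, 92.5, 70.0])

# Wrong: treats a 0-100 percentage as if it were a 0-1 fraction,
# producing large negative "percentages".
wrong = 1 - percent_with_insurance

# Right: take the complement on the same 0-100 scale.
percent_without_insurance = 100 - percent_with_insurance
```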
* Add note about first street data (#1848)
This commit causes no functional change to the code. It does two things:
1. Uses difference instead of - to improve code style for working with sets.
2. Removes the line EXPECTED_MISSING_STATES = ["02", "15"], which is now redundant because of the line I added (in a previous pull request) of ALASKA_AND_HAWAII_EXPECTED_IN_DATA = False.
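The style point in item 1, concretely: `set.difference` reads more explicitly than the `-` operator, and it also accepts any iterable, whereas `-` requires both operands to be sets. (The state codes here are the real FIPS codes for Alaska and Hawaii; the variable names are illustrative.)

```python
states_in_data = {"01", "02", "06", "15"}
expected_states = ["01", "06"]  # a list, not a set

# Operator form: both operands must be sets.
missing_via_operator = states_in_data - set(expected_states)

# Method form: accepts any iterable directly.
missing_via_method = states_in_data.difference(expected_states)
```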
* Remove unused persistent poverty from score (#1835)
* Test a few datasets for overlap in the final score (#1835)
* Add remaining data sources (#1853)
* Apply code-review feedback (#1835)
* Rearrange a little for readability (#1835)
* Add tract test (#1835)
* Add test for score values (#1835)
* Check for unmatched source tracts (#1835)
* Cleanup numeric code to plaintext (#1835)
* Make import more obvious (#1835)
* temp update
* updating with fips check
* adding check on pfs
* updating with pfs test
* Update test_tiles_smoketests.py
* Fix lint errors (#1848)
* Add column names test (#1848)
* Mark tests as smoketests (#1848)
* Move to other score-related tests (#1848)
* Recast Total threshold criteria exceeded to int (#1848)
In writing tests to verify the output of the tiles csv matches the final
score CSV, I noticed TC/Total threshold criteria exceeded was getting
cast from an int64 to a float64 in the process of PostScoreETL. I
tracked it down to the line where we merge the score dataframe with
constants.DATA_CENSUS_CSV_FILE_PATH --- there were > 100 tracts in the
national census CSV that don't exist in the score, so those ended up
with a Total threshold count of np.nan, which is a float, and thereby
cast those columns to float. For the moment I just cast it back.
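The dtype promotion described above, in miniature (hypothetical frames; the fill value is an assumption, since the commit only says the column was cast back): a left merge introduces NaN for unmatched rows, which silently promotes int64 to float64.

```python
import pandas as pd

score = pd.DataFrame({"GEOID": ["A", "B"], "threshold_count": [3, 0]})
census = pd.DataFrame({"GEOID": ["A", "B", "C"]})  # "C" has no score row

merged = census.merge(score, on="GEOID", how="left")
# "C" gets NaN, so the whole column is promoted to float64:
assert merged["threshold_count"].dtype == "float64"

# Cast it back, filling the unmatched rows (here with 0):
merged["threshold_count"] = merged["threshold_count"].fillna(0).astype("int64")
```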
* No need for low memory (#1848)
* Add additional tests of tiles.csv (#1848)
* Drop pre-2010 rows before computing score (#1848)
Note this is probably NOT the optimal place for this change; it might
make more sense for each source to filter its own tracts down to the
acceptable tract list. However, that would be a pretty invasive change,
where this is central and plenty of other things are happening in score
transform that could be moved to sources, so for today, here's where the
change will live.
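The filtering step in miniature (hypothetical tract IDs): restrict the score frame to the accepted 2010 tract list, either with a boolean filter or the equivalent inner join that a later commit switches to.

```python
import pandas as pd

score_df = pd.DataFrame({"GEOID10_TRACT": ["T1", "T2", "T9"], "value": [1, 2, 3]})
tract_list = pd.DataFrame({"GEOID10_TRACT": ["T1", "T2"]})  # 2010 tract universe

# Filter form: keep only rows whose tract is in the accepted list.
filtered = score_df[score_df["GEOID10_TRACT"].isin(tract_list["GEOID10_TRACT"])]

# Equivalent inner-join form:
joined = score_df.merge(tract_list, on="GEOID10_TRACT", how="inner")
```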
* Fix typo (#1848)
* Switch from filter to inner join (#1848)
* Remove no-op lines from tiles (#1848)
* Apply feedback from review, linter (#1848)
* Check the values of everything in the frame (#1848)
* Refactor checker class (#1848)
* Add test for state names (#1848)
* cleanup from reviewing my own code (#1848)
* Fix lint error (#1858)
* Apply Emma's feedback from review (#1848)
* Remove refs to national_df (#1848)
* Account for new, fake nullable bools in tiles (#1848)
To handle a geojson limitation, Emma converted some nullable boolean
columns to float64 in the tiles export with the values {0.0, 1.0, nan},
giving us the same expressiveness. Sadly, this broke my assumption that
all columns between the score and tiles csvs would have the same dtypes,
so I need to account for these new, fake bools in my test.
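The shape of that accommodation (hypothetical column): a nullable boolean in the score CSV shows up in the tiles CSV as float64 over {0.0, 1.0, NaN}, so a test has to compare the values rather than insist on equal dtypes.

```python
import numpy as np
import pandas as pd

score_col = pd.Series([True, False, pd.NA], dtype="boolean")   # score CSV
tiles_col = pd.Series([1.0, 0.0, np.nan], dtype="float64")     # tiles CSV

# Same information, different dtypes; normalize before comparing.
equivalent = score_col.astype("float64").equals(tiles_col)
```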
* Use equals instead of my worse version (#1848)
* Missed a spot where we called _create_score_data (#1848)
* Update per safety (#1848)
Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov>
* just testing that the boolean is preserved on gha
* checking drop tracts works
* adding a check to the agvalue calculation for nri
* updated with error messages
* update Python version on README; tuple typing fix
* Alaska tribal points fix (#1821)
* Bump mistune from 0.8.4 to 2.0.3 in /data/data-pipeline (#1777)
Bumps [mistune](https://github.com/lepture/mistune) from 0.8.4 to 2.0.3.
- [Release notes](https://github.com/lepture/mistune/releases)
- [Changelog](https://github.com/lepture/mistune/blob/master/docs/changes.rst)
- [Commits](https://github.com/lepture/mistune/compare/v0.8.4...v2.0.3)
---
updated-dependencies:
- dependency-name: mistune
dependency-type: indirect
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* poetry update
* initial pass of score tests
* add threshold tests
* added ses threshold (not donut, not island)
* testing suite -- stopping for the day
* added test for lead proxy indicator
* Refactor score tests to make them less verbose and more direct (#1865)
* Cleanup tests slightly before refactor (#1846)
* Refactor score calculations tests
* Feedback from review
* Refactor output tests like calculation tests (#1846) (#1870)
* Reorganize files (#1846)
* Switch from lru_cache to fixture scopes (#1846)
* Add tests for all factors (#1846)
* Mark smoketests and run as part of BE deploy (#1846)
* Update renamed var (#1846)
* Switch from named tuple to dataclass (#1846)
This is annoying, but pylint on Python 3.8 was crashing while parsing the
named tuple. We weren't using any namedtuple-specific features, so I made
the type a dataclass just to get pylint to behave.
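The switch in miniature (hypothetical fields): a frozen dataclass gives the same immutable, attribute-access record semantics as the namedtuple it replaces.

```python
from dataclasses import dataclass
from typing import NamedTuple

class TractRecordOld(NamedTuple):   # the old form that tripped pylint
    geoid: str
    score: float

@dataclass(frozen=True)             # the drop-in replacement
class TractRecord:
    geoid: str
    score: float

old = TractRecordOld("01001020100", 0.5)
new = TractRecord("01001020100", 0.5)
```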
* Add default timeout to requests (#1846)
* Fix type (#1846)
* Fix merge mistake on poetry.lock (#1846)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Jorge Escobar <jorge.e.escobar@omb.eop.gov>
Co-authored-by: Jorge Escobar <83969469+esfoobar-usds@users.noreply.github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Matt Bowen <83967628+mattbowen-usds@users.noreply.github.com>
Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov>