* Add action for creating new score versions (#2035)
Add a new action that allows us to manually create and upload score
versions to S3.
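Roughly, the action boils down to an S3 upload like the following; this is a minimal sketch assuming boto3, with the bucket name, key layout, local path, and version label all made up for illustration:
```python
import boto3

s3 = boto3.client("s3")
version = "1.0"  # hypothetical version label

# Upload the zipped score package to a versioned S3 path.
s3.upload_file(
    Filename="data/score/downloadable/score-data.zip",  # assumed local path
    Bucket="justice40-data",                            # assumed bucket name
    Key=f"data-versions/{version}/score-data.zip",      # assumed key layout
)
```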
* Remove PR-related chat message
* Temporarily remove tile generation (#2035)
We want to see how this works, so we cut down the time it takes to run
by removing tile generation.
* Remove another PR-comment block
* Update file name pattern (#2037)
* Remove ETL from generation (#2037)
I looked more carefully, and this ETL step isn't used in the score, so
there's no need to run it every time. Per previous steps, I removed it
from constants, so the code is still there but it won't run by default.
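As a sketch of the pattern (the constant and dataset names here are assumptions, not the actual module):
```python
# Datasets that run on every ETL pass; the ETL class itself stays in the
# codebase, it's just no longer in the default list.
DATASETS_TO_RUN = [
    "census_acs",
    "ejscreen",
    # "cdc_svi_index",  # removed from the default run, code retained
]
```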
* Score versioning
* adding new zip assets
* tests passing
* debugging
* debugging 2
* get codebook from downloadable path
* upload version file for codebook and shapefile
I did a pretty rough and simple analysis of the variables we put in the
tiles and grepped the frontend code to see (1) whether they're ever
accessed and (2) whether they're actually used, even if only read once.
I removed everything I noticed was not accessed.
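The check itself was nothing fancy; here's a rough sketch of the idea, with the field names and frontend path made up:
```python
from pathlib import Path

TILE_FIELDS = ["HSEF", "LMI_PFS"]   # hypothetical tile variable names
frontend_src = Path("client/src")   # assumed frontend source directory

for field in TILE_FIELDS:
    hits = [
        path for path in frontend_src.rglob("*.ts*")
        if field in path.read_text(errors="ignore")
    ]
    if not hits:
        print(f"{field} is never accessed; candidate for removal")
```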
* Add "Is a Tribal DAC" field (#1998)
* Add tribal DACs to score N final (#1998)
* Add new fields to downloads (#1998)
* Make an int a float (#1998)
* Update field names, apply feedback (#1998)
* Change TA_PERC, change TA_COUNT (#1988, #1989)
- Make TA_PERC_STR back into a nullable float following the rules
requested in #1989
- Move TA_COUNT to be TA_COUNT_AK, also add a null TA_COUNT_C for CONUS
that we can fill in later.
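In pandas terms the change looks roughly like this; the dataframe is hypothetical, and only the column names come from the commit:
```python
import pandas as pd

df = pd.DataFrame({
    "TA_PERC": [0.12, None, 0.50],
    "TA_COUNT": [3, 1, 2],
})

# Back to a nullable float rather than a string, per #1989.
df["TA_PERC_STR"] = df["TA_PERC"].astype("Float64")

# Rename the Alaska count and add an all-null CONUS count to fill in later.
df = df.rename(columns={"TA_COUNT": "TA_COUNT_AK"})
df["TA_COUNT_C"] = pd.NA
```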
* Fix typo in comment (#1988)
* Rezip CSV and Excel files with Codebook
* codebook version
* packages fix
* pydantic
* lint
* Remove markdown link from markdown checker (#1936)
Co-authored-by: Vim <86254807+vim-usds@users.noreply.github.com>
* Backfill population in island areas (#1882)
* Update smoketest to account for backfills (#1882)
As I wrote in the comment:
We backfill island areas with data from the 2010 census, so if THOSE tracts
have data beyond the data source, that's to be expected and is fine to pass.
If some other state or territory does, though, this should fail.
This ends up being a nice way of documenting that behavior, I guess!
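In miniature, the assertion looks something like this; the dataframe, column, and FIPS constant are assumptions:
```python
ISLAND_AREA_FIPS = {"60", "66", "69", "78"}  # AS, GU, MP, VI

# `df` is the hypothetical smoketest frame flagging tracts whose values
# fall outside the source's coverage window.
out_of_range = df[df["has_data_beyond_source"]]
unexpected = set(out_of_range["state_fips"]) - ISLAND_AREA_FIPS

# Island areas are backfilled from the 2010 census, so they may legitimately
# exceed the source's coverage; any other state or territory should fail.
assert not unexpected, f"Unexpected states with backfilled data: {unexpected}"
```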
* Fixup lint issues (#1882)
* Add in race demos to 2010 census pull (#1851)
* Add backfill data to score (#1851)
* Change column name (#1851)
* Fill demos after the score (#1851)
* Add income back, adjust test (#1882)
* Apply code-review feedback (#1851)
* Add test for island area backfill (#1851)
* Fix bad rename (#1851)
* should be working, has unnecessary loggers
* removing loggers and cleaning up
* updating ejscreen tests
* adding tests and responding to PR feedback
* fixing broken smoke test
* delete smoketest docs
* Add tribal data to downloads (#1904)
* Update test pickle with current cols (#1904)
* Remove text of tribe names from GeoJSON (#1904)
* Update test data (#1904)
* Add tribal overlap to smoketests (#1904)
* Better document based on Lucas's feedback (#1835)
* Fix typo (#1835)
* Add test to verify GEOJSON matches tiles (#1835)
* Remove NOOP line (#1835)
* Move GEOJSON generation up for new smoketest (#1835)
* Fixup code format (#1835)
* Update readme for new smoketest (#1835)
* working notebook
* updating notebook
* wip
* fixing broken tests
* adding tribal overlap files
* WIP
* WIP
* WIP, calculated count and names
* working
* partial cleanup
* partial cleanup
* updating field names
* fixing bug
* removing pyogrio
* removing unused imports
* updating test fixtures to be more realistic
* cleaning up notebook
* fixing black
* fixing flake8 errors
* adding tox instructions
* updating etl_score
* suppressing warning
* Use projected CRSes, ignore geom types (#1900)
I looked into this a bit, and in general the geometry type mismatch
changes very little about the calculation; we have a mix of
multipolygons and polygons. The fastest thing to do is just not keep
geom type; I did some runs with it set to both True and False, and
they're the same within 9 digits of precision. Logically we just want
the overlaps, regardless of how the actual geometries are encoded
between the frames, so in this case we can ignore the geom types and
feel okay about it.
I also moved to projected CRSes, since we are actually trying to do area
calculations, so we should. Again, the change is small in magnitude but
logically more sound.
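A minimal sketch of that approach with geopandas; the file names are made up, and EPSG:5070 is just one reasonable equal-area projected CRS for CONUS:
```python
import geopandas as gpd

# Reproject both layers to a projected CRS before doing area math.
tracts = gpd.read_file("tracts.geojson").to_crs(epsg=5070)
tribal = gpd.read_file("tribal_areas.geojson").to_crs(epsg=5070)

# keep_geom_type=False: we only care about the overlapping area, not whether
# the intersection comes back as Polygon or MultiPolygon.
overlap = gpd.overlay(tracts, tribal, how="intersection", keep_geom_type=False)

# Area is meaningful here because the CRS is projected, not geographic.
overlap["overlap_area"] = overlap.geometry.area
```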
* Readd CDC dataset config (#1900)
* adding comments to fips code
* delete unnecessary loggers
Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov>
* Refactor CDC life-expectancy (#1554)
* Update to new tract list (#1554)
* Adjust for tests (#1848)
* Add tests for cdc_places (#1848)
* Add EJScreen tests (#1848)
* Add tests for HUD housing (#1848)
* Add tests for GeoCorr (#1848)
* Add persistent poverty tests (#1848)
* Update for sources without zips, for new validation (#1848)
* Update tests for new multi-CSV bug (#1848)
Lucas updated the CDC life expectancy data to handle a bug where two
states are missing from the US Overall download. Since virtually none of
our other ETL classes download multiple CSVs directly like this, it
required a pretty invasive new mocking strategy.
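The strategy amounts to routing each mocked download to its own fixture; a sketch under the assumption that downloads go through requests.get, with made-up URLs, fixture paths, and class name:
```python
from unittest import mock

FIXTURES = {
    "https://example.com/us.csv": "tests/fixtures/us.csv",
    "https://example.com/states.csv": "tests/fixtures/states.csv",
}

def fake_get(url, *args, **kwargs):
    # Return a different canned CSV depending on which URL is requested.
    response = mock.Mock()
    with open(FIXTURES[url], "rb") as f:
        response.content = f.read()
    return response

with mock.patch("requests.get", side_effect=fake_get):
    etl = CDCLifeExpectancyETL()  # hypothetical class name
    etl.extract()
```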
* Add basic tests for nature deprived (#1848)
* Add wildfire tests (#1848)
* Add flood risk tests (#1848)
* Add DOT travel tests (#1848)
* Add historic redlining tests (#1848)
* Add tests for ME and WI (#1848)
* Update now that validation exists (#1848)
* Adjust for validation (#1848)
* Add health insurance back to cdc places (#1848)
Oops
* Update tests with new field (#1848)
* Test for blank tract removal (#1848)
* Add tracts for clipping behavior
* Test clipping and zfill behavior (#1848)
* Fix bad test assumption (#1848)
* Simplify class, add test for tract padding (#1848)
* Fix percentage inversion, update tests (#1848)
Looking through the transformations, I noticed that we were subtracting
a percentage that is usually between 0 and 100 from 1 instead of 100,
and so were ending up with some surprising results. Confirmed with
lucasmbrown-usds.
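The bug in miniature, with a made-up value:
```python
percent_with = 83.0           # e.g., 83% of tracts have the attribute
wrong = 1 - percent_with      # -82.0, the surprising result
right = 100 - percent_with    # 17.0, the intended inversion
```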
* Add note about first street data (#1848)
This commit causes no functional change. It does two things:
1. Uses difference instead of - to improve code style for working with sets.
2. Removes the line EXPECTED_MISSING_STATES = ["02", "15"], which is now
redundant because of the line I added (in a previous pull request) of
ALASKA_AND_HAWAII_EXPECTED_IN_DATA = False.
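To illustrate the style point in (1), with made-up sets:
```python
expected = {"01", "02", "15"}
actual = {"01"}

missing = expected - actual            # operator form
missing = expected.difference(actual)  # method form reads more explicitly
```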