* temp update
* updating with fips check
* adding check on pfs
* updating with pfs test
* Update test_tiles_smoketests.py
* Fix lint errors (#1848)
* Add column names test (#1848)
* Mark tests as smoketests (#1848)
* Move to other score-related tests (#1848)
* Recast Total threshold criteria exceeded to int (#1848)
In writing tests to verify the output of the tiles csv matches the final
score CSV, I noticed TC/Total threshold criteria exceeded was getting
cast from an int64 to a float64 in the process of PostScoreETL. I
tracked it down to the line where we merge the score dataframe with
constants.DATA_CENSUS_CSV_FILE_PATH: there were more than 100 tracts in the
national census CSV that don't exist in the score, so those ended up
with a Total threshold count of np.nan, which is a float, and thereby
cast those columns to float. For the moment I just cast it back.
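For reference, a minimal sketch of the failure mode and the recast; the column
name matches the commit, but the key name and NaN handling here are
illustrative, not the exact production code:
```python
import numpy as np
import pandas as pd

score = pd.DataFrame(
    {"GEOID10_TRACT": ["01001020100"], "Total threshold criteria exceeded": [3]}
)
census = pd.DataFrame({"GEOID10_TRACT": ["01001020100", "72001956300"]})

# The left merge introduces np.nan for census tracts missing from the score,
# which silently upcasts the int64 count column to float64.
merged = census.merge(score, on="GEOID10_TRACT", how="left")
assert merged["Total threshold criteria exceeded"].dtype == np.dtype("float64")

# The fix for now: handle the NaN rows, then cast the column back to int64
# (illustrative; the real code may treat missing tracts differently).
merged["Total threshold criteria exceeded"] = (
    merged["Total threshold criteria exceeded"].fillna(0).astype("int64")
)
```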
* No need for low memory (#1848)
* Add additional tests of tiles.csv (#1848)
* Drop pre-2010 rows before computing score (#1848)
Note this is probably NOT the optimal place for this change; it might
make more sense for each source to filter its own tracts down to the
acceptable tract list. However, that would be a pretty invasive change,
whereas this spot is central, and plenty of other things happening in score
transform could be moved to sources anyway, so for today, here's where the
change will live.
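A rough sketch of the filter as an inner join (see the commit below); the
tract-list frame and column name are hypothetical:
```python
import pandas as pd

def filter_to_valid_tracts(
    score_df: pd.DataFrame, tract_list: pd.DataFrame
) -> pd.DataFrame:
    # The inner join keeps only tracts present in both frames, dropping the
    # pre-2010 rows that no source should be contributing anyway.
    return score_df.merge(
        tract_list[["GEOID10_TRACT"]], on="GEOID10_TRACT", how="inner"
    )
```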
* Fix typo (#1848)
* Switch from filter to inner join (#1848)
* Remove no-op lines from tiles (#1848)
* Apply feedback from review, linter (#1848)
* Check the values of everything in the frame (#1848)
* Refactor checker class (#1848)
* Add test for state names (#1848)
* cleanup from reviewing my own code (#1848)
* Fix lint error (#1858)
* Apply Emma's feedback from review (#1848)
* Remove refs to national_df (#1848)
* Account for new, fake nullable bools in tiles (#1848)
To handle a geojson limitation, Emma converted some nullable boolean
columns to float64 in the tiles export with the values {0.0, 1.0, nan},
giving us the same expressiveness. Sadly, this broke my assumption that
all columns between the score and tiles csvs would have the same dtypes,
so I need to account for these new, fake bools in my test.
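A sketch of how the test can treat those columns as equivalent; the function
name is made up and this is not the repo's actual checker:
```python
import numpy as np
import pandas as pd

def dtypes_match(score_dtype, tiles_dtype) -> bool:
    """A float64 tiles column with values in {0.0, 1.0, nan} carries the
    same information as a nullable-boolean score column, so accept it."""
    if score_dtype == tiles_dtype:
        return True
    return isinstance(score_dtype, pd.BooleanDtype) and tiles_dtype == np.dtype(
        "float64"
    )
```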
* Use equals instead of my worse version (#1848)
* Missed a spot where we called _create_score_data (#1848)
* Update per safety (#1848)
Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov>
* Add spatial join method (#1871)
Since we'll need to figure out the tracts for a large number of points
in future tickets, add a utility to handle grabbing the tract geometries
and adding tract data to a point dataset.
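Something along these lines, using geopandas; the tract source path, column
names, and function name are assumptions for illustration (newer geopandas
takes predicate=, older versions used op=):
```python
import geopandas as gpd

def add_tracts_for_geometries(points: gpd.GeoDataFrame) -> gpd.GeoDataFrame:
    """Tag each point with the census tract polygon that contains it."""
    tracts = gpd.read_file("census_tracts.geojson")[["GEOID10_TRACT", "geometry"]]
    # Align coordinate reference systems before the spatial join.
    tracts = tracts.to_crs(points.crs)
    return gpd.sjoin(points, tracts, how="left", predicate="within")
```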
* Add FUDS, also jupyter lab (#1871)
* Add YAML configs for FUDS (#1871)
* Allow input geoid to be optional (#1871)
* Add FUDS ETL, tests, test-data notebook (#1871)
This adds the ETL class for Formerly Used Defense Sites (FUDS). This is
different from most other ETLs since these FUDS are not provided by
tract, but instead by geographic point, so we need to assign FUDS to
tracts and then do calculations from there.
* Floats -> Ints, as I intended (#1871)
* Floats -> Ints, as I intended (#1871)
* Formatting fixes (#1871)
* Add test false positive GEOIDs (#1871)
* Add gdal binaries (#1871)
* Refactor pandas code to be more idiomatic (#1871)
Per Emma, the more pandas-y way of doing my counts is using np.where to
add the values I need, then groupby and size. It is definitely more
compact, and also I think more correct!
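Roughly, the pattern looks like this; the column names and categories are made
up for the example:
```python
import numpy as np
import pandas as pd

# Hypothetical FUDS points already joined to their tracts.
fuds = pd.DataFrame(
    {
        "GEOID10_TRACT": ["01001020100", "01001020100", "01003010200"],
        "eligible": [True, False, True],
    }
)

# np.where adds the value to count, then groupby/size does the counting.
fuds["status"] = np.where(fuds["eligible"], "eligible_fuds", "ineligible_fuds")
counts = fuds.groupby(["GEOID10_TRACT", "status"]).size().unstack(fill_value=0)
```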
* Update configs per Emma suggestions (#1871)
* Type fixed! (#1871)
* Remove spurious import from vscode (#1871)
* Snapshot update after changing col name (#1871)
* Move up GDAL (#1871)
* Adjust geojson strategy (#1871)
* Try running census separately first (#1871)
* Fix import order (#1871)
* Cleanup cache strategy (#1871)
* Download census data from S3 instead of re-calculating (#1871)
* Clarify pandas code per Emma (#1871)
Imputes the income field with a light refactor. Needs more refactoring and more tests (I spot-checked). The next ticket will check and address that, but a lot of the "narwhal" architecture is here.
* installation step
* trigger action
* installing to home dir
* dry-run
* pyenv
* py 2.8
* trying s4cmd
* removing pyenv
* poetry s4cmd
* num-threads
* public read
* poetry cache
* s4cmd all around
* poetry cache
* poetry cache
* install poetry packages
* poetry echo
* let's do this
* s4cmd install on run
* s4cmd
* add aws back
* add aws back
* testing census api key and poetry caching
* census api key
* census api
* census api key #3
* 250
* poetry update
* poetry change
* check census api key
* force flag
* update score gen and tilefy; remove cached fips
* small gdal update
* invalidation
* missing cache ids
In order to solve an issue where states with few census tracts appear to have no DACs, we change the low-zoom tiles for states with fewer than some threshold of tracts to be the high-zoom tiles for those states. Thus, WY now has DACs even at low zoom. Yay!
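A simplified sketch of the idea; the threshold value and column names are
invented for illustration:
```python
import pandas as pd

LOW_ZOOM_TRACT_THRESHOLD = 150  # hypothetical cutoff

def choose_low_zoom_rows(
    high_zoom: pd.DataFrame, simplified: pd.DataFrame
) -> pd.DataFrame:
    """For states under the tract threshold, ship the detailed high-zoom rows
    at low zoom too, so sparse states like WY still show their DACs."""
    tracts_per_state = high_zoom.groupby("state_fips")["GEOID10_TRACT"].transform(
        "size"
    )
    small_state_rows = high_zoom[tracts_per_state < LOW_ZOOM_TRACT_THRESHOLD]
    keep_simplified = simplified[
        ~simplified["state_fips"].isin(small_state_rows["state_fips"].unique())
    ]
    return pd.concat([keep_simplified, small_state_rows])
```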
Made some quick, mostly cosmetic updates to the quick-launch changes. This mostly entailed changing strings to constants and cleaning up some code to make it neater.
Changes: PR AMI, updating ag loss, and dropping PR from some threshold counts.
* Install and run pandas-vet
This doesn't fix the errors, but it can give us a starting point for the
discussion of which of these errors we care about.
* Ignore the errors for now
* Ignore eeoc.gov in link checker
Sometimes it seems down from the perspective of github actions.
* Remove requirements.txt as a dependency
This converts both docker and tox to use poetry, eliminating usage of
requirements.txt in both flows.
- In tox, uses the tox-poetry package, which installs dependencies from
the lockfile (see the sketch after this list).
- In docker, uses
https://stackoverflow.com/questions/53835198/integrating-python-poetry-with-docker
as a reference.
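For the tox half, an illustrative (not verbatim) config is tiny once the
plugin is available:
```ini
# tox.ini (sketch) -- with tox-poetry installed, each environment's
# dependencies are resolved from poetry's lockfile, not requirements.txt.
[tox]
envlist = py38
requires = tox-poetry

[testenv]
commands = pytest
```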
* Don't copy pyproject.toml
* Remove obsoleted docs about requirements.txt
* Add --full-trace option to pytest
* Fix liccheck
liccheck works with requirements.txt, not with poetry, so there needs to
be an extra translation step.
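The translation step is roughly the following (the flags are real
poetry/liccheck options; the file names are illustrative):
```bash
# Export the poetry lockfile to requirements format, then point liccheck at it.
poetry export --without-hashes -f requirements.txt -o requirements.txt
liccheck -r requirements.txt
```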
* TEMP: Add WIP fix for pandas issue
This is just to see if the github actions would pass once this fix gets
merged, but it's being reviewed separately.
* Revert "TEMP: Add WIP fix for pandas issue"
This reverts commit 06e38e8cc77f5f3105c6e7a9449901db67aa1c82.
* Add pytest to tox run in CI/CD
* Try fixing tox dependencies for pytest
* update poetry to get ci/cd passing
* Run poetry export with --dev flag to include dev dependencies such as pytest
* WIP updating test fixtures to include PDF
* Remove dev dependencies from reqs and add pytest to envlist to make build faster
* passing score_post tests
* Add pytest tox (#729)
* Fix failing pytest
* Fixes failing tox tests and updates requirements.txt to include dev deps
* pickle protocol 4
Co-authored-by: Shelby Switzer <shelby.switzer@cms.hhs.gov>
Co-authored-by: Jorge Escobar <jorge.e.escobar@omb.eop.gov>
Co-authored-by: Billy Daly <williamdaly422@gmail.com>
Co-authored-by: Jorge Escobar <83969469+esfoobar-usds@users.noreply.github.com>
* Fixes #341 -
As a J40 developer, I want to write Unit Tests for the ETL files,
so that tests are run on each commit
* Location bug
* Adding Load tests
* Fixing XLSX filename
* Adding downloadable zip test
* updating pickle
* Fixing pylint warnings
* Update readme to correct some typos and reorganize test content structure
* Removing unused schemas file, adding details to readme around pickles, per PR feedback
* Update test to pass with Score D added to score file; update path in readme
* fix requirements.txt after merge
* fix poetry.lock after merge
Co-authored-by: Shelby Switzer <shelby.switzer@cms.hhs.gov>
* Fixes #303: adding downloadable zip archive logic
* linter recommendations
* Pushes data directory to AWS. We'll want to move to use AWS for this ASAP, but this works for now
* updating pattern
* Fixes #456 - Our data directory should adopt standard Python package structure
* a few missed references
* updating readme
* updating requirements
* Running Black
* Fixes for flake8
* updating pylint
* Adds flake8, pylint, liccheck to dependencies for data-pipeline
* Sets up and runs black autoformatting
* Adds flake8 to tox linting
* Fixes flake8 error F541 f string missing placeholders
* Fixes flake8 E501 line too long
* Fixes flake8 F401 imported but not used
* Adds pylint to tox and disables the following pylint errors (see the sketch after this list):
- C0114: module docstrings
- R0201: method could have been a function
- R0903: too few public methods
- C0103: name case styling
- W0511: fix me
- W1203: f-string interpolation in logging
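As a sketch, those disables translate to an invocation like the one below;
the package path is hypothetical, and the real list lives in the tox/pylint
config rather than on the command line:
```bash
pylint --disable=C0114,R0201,R0903,C0103,W0511,W1203 data_pipeline/
```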
* Adds utils.py to tox.ini linting, runs black on utils.py
* Fixes import related pylint errors: C0411 and C0412
* Fixes or ignores remaining pylint errors (for discussion later)
* Adds safety and liccheck to tox.ini
* Adds tox as a dev dependency to data/data-pipeline/pyproject.toml: Also updates poetry.lock and requirements.txt
* Adds tox.ini to test build of data/data-pipeline
* Sets up GitHub actions workflow for data/ directory
* Tries to get Data Checks GitHub action to run
* Fixes error with GitHub action
* Migrates data/data-roadmap from setuptools to poetry
* Sets up tox file for data/data-roadmap
* Adds github action for data/data-roadmap
* Fixes syntax error in data-checks.yml
* Second attempt at fixing data-checks.yml
* Export poetry requirements to requirements.txt
* Revert "Migrates data/data-roadmap from setuptools to poetry"
This reverts commit e8367652d43c1c9beee500f792c8f41e1c1fc462.
* Removes pyproject.toml and reverts requirements.txt as well
* initial checkin
* gitignore and docker-compose update
* readme update and error on hud
* encoding issue
* one more small README change
* data roadmap re-structure
* pyproject sort
* small update to score output folders
* checkpoint
* couple of last fixes