* Update field name to follow the constant naming standard
* Add table of ETL commands to README
* Update Generate Map Tiles run time
* Add a comma to copy
* Add 3 state UI experience
- PR will only show workforce dev
- IA will only show workforce dev w/o linguistic iso
- update tests to test 3 states
- change state to territory for Island Areas
* Modify PR and IA threshold counts
* Update tile_data_expected.pkl file
* Remove requirements.txt as a dependency
This converts both docker and tox to use poetry, eliminating usage of
requirements.txt in both flows.
- In tox, uses the tox-poetry package which installs dependencies from
the lockfile.
- In docker, uses
https://stackoverflow.com/questions/53835198/integrating-python-poetry-with-docker
as a reference.
* Don't copy pyproject.toml
* Remove obsoleted docs about requirements.txt
* Add --full-trace option to pytest
* Fix liccheck
liccheck works with requirements.txt, not with poetry, so there needs to be an extra translation step: the poetry dependencies are exported to a requirements-format file that liccheck can then check.
* TEMP: Add WIP fix for pandas issue
This is just to see if the github actions would pass once this fix gets
merged, but it's being reviewed separately.
* Revert "TEMP: Add WIP fix for pandas issue"
This reverts commit 06e38e8cc77f5f3105c6e7a9449901db67aa1c82.
* First pass of updating documentation for new users
Trying to look at this from the perspective of someone new to the project, and to create some pathways that make it easier for people to get to the content they are looking for.
* Make it clear that docker is doing the setup
* Link installation again from the main README
* Add some docs about the github actions
* Add markdown link check
* Move git installation first
* Add config for markdown link checker
* Fix some links
* Correct handling of repo root relative links
* Fix broken links in data roadmap
* Fix more broken links
* Fix more links
* Ignore link that's returning a 403 to the checker
It actually works if you open it in a browser.
* Fix another broken link
* Ignore more urls that don't work
* Update the readme under docs
* Add some more dataset links
* More strongly call out the quickstart
* Try to call out the quickstart link even more
* Fix dead links
* Add note about initialization time
* Remove broken link from Spanish install guide
These will be updated later with a full translation
* Update Side Panel Tile Data
* Update Side Panel Tile Data
* Correct indicator names to match csv
* Replace Score with Rate
* Comment out FEMA Loss Rate to troubleshoot
* Removes all "FEMA Loss Rate" array elements
* Revert FEMA to Score
* Remove expected loss rate
* Remove RMP and NPL from BASIC array
* Attempt to make mismatched shapes align
- fix README typo
* Add Score L indicators to TILE_SCORE_FLOAT_COLUMNS
* removing cbg references
* completes the ticket
* Update side panel fields
* Update index file writing to create parent dir
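A minimal sketch of the pattern this fix refers to, creating the parent directory before writing the index file; the path below is hypothetical:

```python
from pathlib import Path

# Hypothetical output location; the real path comes from the pipeline's data directory settings.
index_path = Path("data/score/downloadable/index.csv")

# Create the parent directory (and any missing ancestors) first, so writing the
# index file does not fail on a fresh checkout where the directory does not exist yet.
index_path.parent.mkdir(parents=True, exist_ok=True)
index_path.write_text("example,contents\n")
```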
* Updates from linting
* fixing missing field_names for island territories 90th percentile fields
* Update downloadable fields and fix field name
* Update file fields and tests
* Update ordering of fields and leave TODO
* Update pickle after re-ordering of file
* fixing bugs in etl_score_geo
* Repeating index for diesel fix
* passing tests
* adding pytest.ini
Co-authored-by: Vim USDS <vimal.k.shah@omb.eop.gov>
Co-authored-by: Shelby Switzer <shelby.switzer@cms.hhs.gov>
Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov>
* switching to low
* fixing score-etl-post
* updating comments
* fixing comparison
* create separate field for clarity
* comment fix
* removing healthy food
* fixing bug in score post
* running black and adding comment
* Update pickles and add helpful notes to README
Co-authored-by: Shelby Switzer <shelby.switzer@cms.hhs.gov>
* Add pytest to tox run in CI/CD
* Try fixing tox dependencies for pytest
* update poetry to get ci/cd passing
* Run poetry export with --dev flag to include dev dependencies such as pytest
* WIP updating test fixtures to include PDF
* Remove dev dependencies from reqs and add pytest to envlist to make build faster
* passing score_post tests
* Add pytest tox (#729)
* Fix failing pytest
* Fixes failing tox tests and updates requirements.txt to include dev deps
* pickle protocol 4 (see the sketch below)
Co-authored-by: Shelby Switzer <shelby.switzer@cms.hhs.gov>
Co-authored-by: Jorge Escobar <jorge.e.escobar@omb.eop.gov>
Co-authored-by: Billy Daly <williamdaly422@gmail.com>
Co-authored-by: Jorge Escobar <83969469+esfoobar-usds@users.noreply.github.com>
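Pickle protocol 5 requires Python 3.8+, while protocol 4 is readable on older interpreters, so pinning the fixture to protocol 4 presumably keeps it loadable across environments. A minimal sketch, assuming the fixture is written with pandas; the path and columns are illustrative:

```python
import pandas as pd

# Illustrative fixture data; the real fixture is the tile data snapshot.
df = pd.DataFrame({"GEOID10": ["0100100001001"], "Score D (percentile)": [0.42]})

# Pin the pickle protocol to 4 so interpreters older than Python 3.8
# (which introduced protocol 5) can still load the fixture.
df.to_pickle("tests/snapshots/tile_data_expected.pkl", protocol=4)
```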
* Fixes #341 -
As a J40 developer, I want to write Unit Tests for the ETL files,
so that tests are run on each commit
* Location bug
* Adding Load tests
* Fixing XLSX filename
* Adding downloadable zip test
* updating pickle
* Fixing pylint warnings
* Update readme to correct some typos and reorganize the test content structure
* Removing unused schemas file, adding details to readme around pickles, per PR feedback
* Update test to pass with Score D added to score file; update path in readme
* fix requirements.txt after merge
* fix poetry.lock after merge
Co-authored-by: Shelby Switzer <shelby.switzer@cms.hhs.gov>
* WIP refactor
* Extract score calculations into their own methods
* do all initial df prep in a single method
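A schematic sketch of the shape this refactor describes: one method does all the initial dataframe prep, and each score calculation gets its own method. The class and method names here are hypothetical, not the project's actual ones:

```python
import pandas as pd


class ScoreCalculator:
    def __init__(self, df: pd.DataFrame):
        self.df = df

    def _prepare_initial_df(self) -> None:
        # All initial dataframe prep (renames, type fixes, fills) happens in one place.
        self.df = self.df.fillna(0)

    def _add_score_a(self) -> None:
        # Each score calculation is extracted into its own small method.
        self.df["Score A"] = self.df[["Indicator 1", "Indicator 2"]].mean(axis=1)

    def calculate(self) -> pd.DataFrame:
        self._prepare_initial_df()
        self._add_score_a()
        return self.df
```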
* Fix error in docs for running etl for single dataset
* WIP understanding HUD and linguistic iso data
* Add comments from initial group review on PR
Co-authored-by: Shelby Switzer <shelby.switzer@cms.hhs.gov>
* Initial draft for data provenance
We want to make the data usable/available at every step of our data
pipeline. This starts the addition to the README that spells out the data provenance and where each version of the data lives as it goes through our pipeline.
* Update README with placeholders for next steps in data provenance
* Add coming soon placeholders for remaining data locations
Co-authored-by: Shelby Switzer <shelby.switzer@cms.hhs.gov>
* Fixes #456 - Our data directory should adopt a standard Python package structure
* a few missed references
* updating readme
* updating requirements
* Running Black
* Fixes for flake8
* updating pylint
* Minor documentation updates, plus CalEnviroScreen S3 URL fix
* Update score comparison docs and code
* Add steps for running the comparison tool
* Update HUD recap ETL to ensure GEOID is imported as a string (if it is imported as an integer by default, the leading "0" is stripped from many IDs)
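A minimal sketch of the dtype fix described above; the file name is hypothetical and the column name is illustrative:

```python
import pandas as pd

# Read GEOID as a string; with default integer inference, an ID like "01001"
# would come back as 1001 with its leading zero stripped.
df = pd.read_csv("hud_recap.csv", dtype={"GEOID": str})
```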
* Add note about execution time
* Move step from paragraph to list
* Update output dir in README for comp tool
Co-authored-by: Shelby Switzer <shelby.switzer@cms.hhs.gov>
* initial checkin
* gitignore and docker-compose update
* readme update and error on hud
* encoding issue
* one more small README change
* data roadmap restructure
* pyproject sort
* small update to score output folders
* checkpoint
* couple of last fixes