* Rezip CSV and Excel files with Codebook
* codebook version
* packages fix
* pydantic
* lint
* Remove markdown link from markdown checker (#1936)
Co-authored-by: Vim <86254807+vim-usds@users.noreply.github.com>
* installation step
* trigger action
* installing to home dir
* dry-run
* pyenv
* py 2.8
* trying s4cmd
* removing pyenv
* poetry s4cmd
* num-threads
* public read
* poetry cache
* s4cmd all around
* poetry cache
* poetry cache
* install poetry packages
* poetry echo
* let's do this
* s4cmd install on run
* s4cmd
* add aws back
* add aws back
* testing census api key and poetry caching
* census api key
* census api
* census api key #3
* 250
* poetry update
* poetry change
* check census api key
* force flag
* update score gen and tilefy; remove cached fips
* small gdal update
* invalidation
* missing cache ids
To solve an issue where states with few census tracts appeared to have no DACs, we now use the high-zoom tract set as the low-zoom set for any state below a threshold number of tracts. As a result, WY now shows DACs even at low zoom. Yay!
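The mechanics are roughly as follows; this is a minimal sketch with assumed names (HIGH_TRACT_THRESHOLD, the state_fips column, and the GeoDataFrame inputs are illustrative, not the pipeline's actual code):

```python
import pandas as pd
import geopandas as gpd  # assumed dependency for the tract geometries

HIGH_TRACT_THRESHOLD = 150  # assumed cutoff; the real threshold may differ


def build_low_zoom(high_gdf: gpd.GeoDataFrame, low_gdf: gpd.GeoDataFrame) -> gpd.GeoDataFrame:
    """Return low-zoom tracts, swapping in high-zoom tracts for states with few tracts."""
    tracts_per_state = high_gdf.groupby("state_fips").size()
    small_states = tracts_per_state[tracts_per_state < HIGH_TRACT_THRESHOLD].index

    # Keep the simplified low-zoom geometries for large states, but fall back to the
    # detailed high-zoom geometries for small states (e.g. WY) so their DACs stay
    # visible at low zoom.
    keep_low = low_gdf[~low_gdf["state_fips"].isin(small_states)]
    keep_high = high_gdf[high_gdf["state_fips"].isin(small_states)]
    return gpd.GeoDataFrame(pd.concat([keep_low, keep_high], ignore_index=True))
```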
Made some quick, mostly cosmetic updates to the quick launch changes, mainly replacing string literals with constants and cleaning up some code to make it neater.
Changes: PR AMI, updating agricultural loss, and dropping PR from some threshold counts.
* Install and run pandas-vet
This doesn't fix the errors, but it can give us a starting point for the
discussion of which of these errors we care about.
* Ignore the errors for now
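For context, pandas-vet is a flake8 plugin, so its findings show up as PD-prefixed codes in the lint output. A small illustration of the kind of pattern it flags (PD002 discourages inplace=True); this is example code, not code from this repo:

```python
import pandas as pd

df = pd.DataFrame({"state": ["WY", "CA"], "tracts": [3, 9000]})

# pandas-vet reports this as PD002 ("'inplace = True' should be avoided") ...
df.rename(columns={"tracts": "tract_count"}, inplace=True)

# ... and prefers explicit reassignment instead.
df = df.rename(columns={"tract_count": "tracts"})
```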
* Ignore eeoc.gov in link checker
Sometimes it appears to be down from the perspective of GitHub Actions.
* Remove requirements.txt as a dependency
This converts both docker and tox to use poetry, eliminating usage of
requirements.txt in both flows.
- In tox, uses the tox-poetry package which installs dependencies from
the lockfile.
- In docker, uses
https://stackoverflow.com/questions/53835198/integrating-python-poetry-with-docker
as a reference.
* Don't copy pyproject.toml
* Remove obsoleted docs about requirements.txt
* Add --full-trace option to pytest
* Fix liccheck
liccheck works with requirements.txt, not with poetry, so there needs to
be an extra translation step.
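A sketch of that translation step as a small Python helper, purely for illustration (the exact flags, file names, and strategy configuration are assumptions, not the project's actual tox setup):

```python
import subprocess


def run_liccheck() -> None:
    """Export the poetry lockfile to requirements.txt, then run liccheck against it."""
    # Translate the poetry-managed dependencies into a requirements.txt that liccheck understands.
    subprocess.run(
        ["poetry", "export", "-f", "requirements.txt", "-o", "requirements.txt", "--without-hashes"],
        check=True,
    )
    # Check the exported requirements against the configured license strategy.
    subprocess.run(["liccheck", "-r", "requirements.txt"], check=True)


if __name__ == "__main__":
    run_liccheck()
```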
* TEMP: Add WIP fix for pandas issue
This is just to see whether the GitHub Actions checks would pass once this fix gets
merged; the fix itself is being reviewed separately.
* Revert "TEMP: Add WIP fix for pandas issue"
This reverts commit 06e38e8cc77f5f3105c6e7a9449901db67aa1c82.
* Add pytest to tox run in CI/CD
* Try fixing tox dependencies for pytest
* update poetry to get ci/cd passing
* Run poetry export with --dev flag to include dev dependencies such as pytest
* WIP updating test fixtures to include PDF
* Remove dev dependencies from reqs and add pytest to envlist to make build faster
* passing score_post tests
* Add pytest tox (#729)
* Fix failing pytest
* Fixes failing tox tests and updates requirements.txt to include dev deps
* pickle protocol 4
Co-authored-by: Shelby Switzer <shelby.switzer@cms.hhs.gov>
Co-authored-by: Jorge Escobar <jorge.e.escobar@omb.eop.gov>
Co-authored-by: Billy Daly <williamdaly422@gmail.com>
Co-authored-by: Jorge Escobar <83969469+esfoobar-usds@users.noreply.github.com>
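Regarding the "pickle protocol 4" item above: pinning the protocol keeps pickled test fixtures readable on older Python versions. A minimal sketch (file and column names are illustrative only):

```python
import pandas as pd

df = pd.DataFrame({"GEOID10": ["56001000100"], "Score D": [0.42]})

# Protocol 4 can be read by Python 3.4+, whereas the highest protocol
# (5 on Python 3.8+) cannot be read by older interpreters.
df.to_pickle("score_fixture.pkl", protocol=4)

restored = pd.read_pickle("score_fixture.pkl")
assert restored.equals(df)
```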
* Fixes #341 -
As a J40 developer, I want to write Unit Tests for the ETL files,
so that tests are run on each commit
* Location bug
* Adding Load tests
* Fixing XLSX filename
* Adding downloadable zip test
* updating pickle
* Fixing pylint warnings
* Update readme to correct some typos and reorganize test content structure
* Removing unused schemas file, adding details to readme around pickles, per PR feedback
* Update test to pass with Score D added to score file; update path in readme
* fix requirements.txt after merge
* fix poetry.lock after merge
Co-authored-by: Shelby Switzer <shelby.switzer@cms.hhs.gov>
* Fixes #303: adding downloadable zip archive logic
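A rough sketch of that kind of zip-archive step, assuming the CSV/Excel outputs already exist on disk (the paths and archive name are illustrative, not the pipeline's real ones):

```python
from pathlib import Path
from zipfile import ZIP_DEFLATED, ZipFile


def make_downloadable_zip(output_dir: Path, archive_path: Path) -> Path:
    """Bundle the generated CSV and Excel files into a single downloadable zip."""
    with ZipFile(archive_path, "w", compression=ZIP_DEFLATED) as archive:
        for path in sorted(output_dir.glob("*")):
            if path.suffix in {".csv", ".xlsx"}:
                # arcname keeps the archive flat instead of embedding the full directory path.
                archive.write(path, arcname=path.name)
    return archive_path


# Example usage with made-up locations:
# make_downloadable_zip(Path("data/score/downloadable"), Path("data/score/downloadable/screening_tool_data.zip"))
```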
* linter recommendations
* Pushes data directory to AWS. We'll want to move to use AWS for this ASAP, but this works for now
* updating pattern
* Fixes#456 - Our data directory should adopt standard python package structure
* a few missed references
* updating readme
* updating requirements
* Running Black
* Fixes for flake8
* updating pylint
* Adds flake8, pylint, and liccheck to dependencies for data-pipeline
* Sets up and runs black autoformatting
* Adds flake8 to tox linting
* Fixes flake8 error F541 f string missing placeholders
* Fixes flake8 E501 line too long
* Fixes flake8 F401 imported but not used
* Adds pylint to tox and disables the following pylint errors:
- C0114: module docstrings
- R0201: method could have been a function
- R0903: too few public methods
- C0103: name case styling
- W0511: fix me
- W1203: f-string interpolation in logging
* Adds utils.py to tox.ini linting, runs black on utils.py
* Fixes import related pylint errors: C0411 and C0412
* Fixes or ignores remaining pylint errors (for discussion later)
* Adds safety and liccheck to tox.ini
* Adds tox as a dev dependency to data/data-pipeline/pyproject.toml; also updates poetry.lock and requirements.txt
* Adds tox.ini to test build of data/data-pipeline
* Sets up GitHub actions workflow for data/ directory
* Tries to get Data Checks GitHub action to run
* Fixes error with GitHub action
* Migrates data/data-roadmap from setuptools to poetry
* Sets up tox file for data/data-roadmap
* Adds github action for data/data-roadmap
* Fixes syntax error in data-checks.yml
* Second attempt at fixing data-checks.yml
* Export poetry requirements to requirements.txt
* Revert "Migrates data/data-roadmap from setuptools to poetry"
This reverts commit e8367652d43c1c9beee500f792c8f41e1c1fc462.
* Removes pyproject.toml and reverts requirements.txt as well
* initial checkin
* gitignore and docker-compose update
* readme update and error on hud
* encoding issue
* one more small README change
* data roadmap re-structure
* pyproject sort
* small update to score output folders
* checkpoint
* couple of last fixes