* WIP on parallelizing
* switching to get_tmp_path for nri
* switching to get_tmp_path everywhere necessary
* fixing linter errors
* moving heavy ETLs to front of line
* add hold
* moving cdc places up
* removing unnecessary print
* moving h&t up
* adding parallel to geo post
* better census labels
* switching to concurrent futures
* fixing output
This updates the backend to produce tile data with island indicators / island fields.
Contains:
- new tile codes for island data
- a threshold column that specifies the number of thresholds to show
- a UI experience column that specifies which UI experience to show (sketched below)
TODO: Drop the logger info message from main :)
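For flavor, the new columns might be wired into the tile export along these lines; the field names and tile codes below are hypothetical, not the actual ones in the codebase:
```
# A minimal sketch (hypothetical names and codes) of the new tile columns:
TILES_SCORE_COLUMNS = {
    # ... existing mappings ...
    "Island Areas Unemployment (percentile)": "IA_UN_PFS",  # island indicator field
    "Number of thresholds to show": "THRHLD",               # threshold column
    "UI experience": "UI_EXP",                              # which UI experience to show
}
```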
When implementing Definition M for the score, the variable names were not yet updated. For example,
this legacy field name:
```
UNEMPLOYMENT_LOW_HS_EDUCATION_FIELD = (
f"Greater than or equal to the {PERCENTILE}th percentile for unemployment"
" and has low HS education"
)
```
should actually be renamed to something like this:
```
UNEMPLOYMENT_LOW_HS_LOW_HIGHER_ED_FIELD = (
f"Greater than or equal to the {PERCENTILE}th percentile for unemployment"
" and has low HS education and low higher ed attendance"
)
```
This PR makes the backend updates for this -- keeping the old fields and adding new, Score M-specific fields, as listed below (a hedged sketch follows the checklist):
- [x] `field_names`: add new fields to capture low_higher_ed
- [x] `score_m`: replace old fields with new fields
- [x] `DOWNLOADABLE_SCORE_COLUMNS`: replace old fields with new fields
- [x] `TILES_SCORE_COLUMNS`: replace old fields with new fields
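As a sketch of the checklist above (exact constant names and list contents are illustrative, not the real file contents), the new field and one of the list swaps could look like:
```
# field_names: new Score M specific field added alongside the legacy one
UNEMPLOYMENT_LOW_HS_LOW_HIGHER_ED_FIELD = (
    f"Greater than or equal to the {PERCENTILE}th percentile for unemployment"
    " and has low HS education and low higher ed attendance"
)

# etl constants: swap the old field for the new one (illustrative excerpt)
DOWNLOADABLE_SCORE_COLUMNS = [
    # field_names.UNEMPLOYMENT_LOW_HS_EDUCATION_FIELD,  # replaced
    field_names.UNEMPLOYMENT_LOW_HS_LOW_HIGHER_ED_FIELD,
]
```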
* updated loss rate rounding
* fixing a typo in variable name
* fixing typo in variable name
* oops, now ready to push
* updated pickle with float for loss rate columns
* fixed a typo; now multiplies all loss rates by 100, consistent with other percentages
* updated with final pickles, all tests passing
* updated, incorporating Lucas's PR comments
* changed literal to field name
* Install and run pandas-vet
This doesn't fix the errors, but it can give us a starting point for the
discussion of which of these errors we care about.
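For context, pandas-vet is a flake8 plugin, so it runs as part of linting once installed. A small example of the kind of pattern it flags (PD002, avoid inplace=True):
```
import pandas as pd

df = pd.DataFrame({"a": [1, 2]})

# Flagged by pandas-vet (PD002: 'inplace = True' should be avoided):
df.rename(columns={"a": "b"}, inplace=True)

# The assignment form it prefers instead:
df = df.rename(columns={"b": "c"})
```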
* Ignore the errors for now
* Ignore eeoc.gov in link checker
Sometimes it seems down from the perspective of github actions.
* Add pyproject.toml to fix docker compose build
Even though we want to use locked dependencies, pyproject.toml is still
required.
* update Dockerfile
Co-authored-by: Jorge Escobar <83969469+esfoobar-usds@users.noreply.github.com>
* Remove requirements.txt as a dependency
This converts both docker and tox to use poetry, eliminating usage of
requirements.txt in both flows.
- In tox, uses the tox-poetry package which installs dependencies from
the lockfile.
- In docker, uses
https://stackoverflow.com/questions/53835198/integrating-python-poetry-with-docker
as a reference.
* Don't copy pyproject.toml
* Remove obsoleted docs about requirements.txt
* Add --full-trace option to pytest
* Fix liccheck
liccheck works with requirements.txt, not with poetry, so there needs to
be an extra translation step (e.g., exporting the lockfile to a
requirements.txt via poetry export before running liccheck).
* TEMP: Add WIP fix for pandas issue
This is just to see if the github actions would pass once this fix gets
merged, but it's being reviewed separately.
* Revert "TEMP: Add WIP fix for pandas issue"
This reverts commit 06e38e8cc77f5f3105c6e7a9449901db67aa1c82.
* wip - added tests - 1 failing
* added check for empty series + added test
* passing tests
* parallelism in variable assignment choice
* resolve merge conflicts
* variable name changes
* cleanup logic and move comments out of main code execution + add one more test for an extreme example with -np.inf
* revisions to handle type ambiguity
* fixing tests
* fix pytest
* fix linting
* fix pytest
* reword comments
* cleanup comments
* cleanup comments - fix typo
* added type check and corresponding test
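The commits above are terse, so here is a purely illustrative sketch of what the empty-series and type checks plus their tests might look like; every name here is hypothetical:
```
import numpy as np
import pandas as pd
import pytest


def _validate_series(series) -> None:
    # Hypothetical guards mirroring the commits above: type check first,
    # then reject empty input.
    if not isinstance(series, pd.Series):
        raise TypeError("expected a pandas Series")
    if series.empty:
        raise ValueError("series must not be empty")


def test_rejects_empty_series():
    with pytest.raises(ValueError):
        _validate_series(pd.Series([], dtype=float))


def test_allows_extreme_values():
    # The extreme example with -np.inf should still pass validation.
    _validate_series(pd.Series([-np.inf, 0.0, 1.0]))
```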
* language cleanup
* revert
* update pickle fixture
Co-authored-by: Jorge Escobar <jorge.e.escobar@omb.eop.gov>
* Re-export requirements.txt to fix version errors
The version of lxml in this file had a known vulnerability that got
caught by the "safety" checker, but it is updated in the poetry files.
Regenerated using:
https://github.com/usds/justice40-tool/tree/main/data/data-pipeline#miscellaneous
* Fix lint error
* Run lint on all envs and add comments
* Ignore tests that fail lint because of dev deps
* Ignore medium.com in link checker
It's returning 403s to github actions...
* draft wip
* initial commit
* clear output from notebook
* revert to 65ceb7900f
* make michigan prefix more readable
* standardize Michigan names and move all constants from class into field names module
* include only pertinent columns for scoring comparison tool
* michigan EJSCREEN standardization
* final PR feedback
* added exposition and summary of Michigan EJSCREEN
* fix typo
Co-authored-by: Saran Ahluwalia <ahlusar.ahluwalia@gmail.com>
* First pass of updating documentation for new users
Trying to look at this from the perspective of someone new to the
project, and create some pathways to make it easier for people to get to
the content they are looking for.
* Make it clear that docker is doing the setup
* Link installation again from the main README
* Add some docs about the github actions
* Add markdown link check
* Move git installation first
* Add config for markdown link checker
* Fix some links
* Correct handling of repo root relative links
* Fix broken links in data roadmap
* Fix more broken links
* Fix more links
* Ignore link that's returning a 403 to the checker
It actually works if you go in a browser.
* Fix another broken link
* Ignore more urls that don't work
* Update the readme under docs
* Add some more dataset links
* More strongly call out the quickstart
* Try to call out the quickstart link even more
* Fix dead links
* Add note about initialization time
* Remove broken link from spanish install guide
These will be updated later with a full translation
* Update Side Panel Tile Data
* Correct indicator names to match csv
* Replace Score with Rate
* Comment out FEMA Loss Rate to troubleshoot
* Removes all "FEMA Loss Rate" array elements
* Revert FEMA to Score
* Remove expected loss rate
* Remove RMP and NPL from BASIC array
* Attempt to make shape mismatch align
- fix README typo
* Add Score L indicators to TILE_SCORE_FLOAT_COLUMNS
* removing cbg references
* completes the ticket
* Update side panel fields
* Update index file writing to create parent dir
* Updates from linting
* fixing missing field_names for island territories 90th percentile fields
* Update downloadable fields and fix field name
* Update file fields and tests
* Update ordering of fields and leave TODO
* Update pickle after re-ordering of file
* fixing bugs in etl_score_geo
* Repeating index for diesel fix
* passing tests
* adding pytest.ini
Co-authored-by: Vim USDS <vimal.k.shah@omb.eop.gov>
Co-authored-by: Shelby Switzer <shelby.switzer@cms.hhs.gov>
Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov>
* switching to low
* fixing score-etl-post
* updating comments
* fixing comparison
* create separate field for clarity
* comment fix
* removing healthy food
* fixing bug in score post
* running black and adding comment
* Update pickles and add helpful notes to README
Co-authored-by: Shelby Switzer <shelby.switzer@cms.hhs.gov>
* Adds four fields:
  - Summer days above 90F
  - Percent low access to healthy food
  - Percent impenetrable surface areas
  - Low third grade reading proficiency
* Each of these four gets added into Definition L in various factors.
* Additionally, I add college attendance fields to the ETL for Census ACS.
* This PR also introduces the notion of "reverse percentiles", relevant to ticket #970; a sketch of the idea follows.
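A minimal sketch of one way to compute a "reverse percentile" in pandas, where a lower raw value yields a higher percentile rank (the column names are made up):
```
import pandas as pd

df = pd.DataFrame({"median_income": [30_000, 45_000, 60_000, 75_000]})

# Standard percentile: a higher raw value gets a higher rank.
df["income_percentile"] = df["median_income"].rank(pct=True)

# Reverse percentile: a lower raw value gets a higher rank, useful when
# a low value indicates a higher burden.
df["income_reverse_percentile"] = df["median_income"].rank(pct=True, ascending=False)
```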
* Update etl constants to use score field_names
Put strings around tract IDs in downloadable CSV
No need to modify the xls file creation because the string type is
preserved and interpreted correctly in Excel already.
One note is that this does cause the ID in the CSV to have quotes
around it, which might be annoying. Maybe we don't want this behavior?
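A hedged sketch of the quoting described above; the column name and the exact to_csv call in the ETL are assumptions:
```
import csv

import pandas as pd

df = pd.DataFrame({"GEOID10_TRACT": ["01001020100"]})

# Quoting non-numeric fields keeps the string-typed tract IDs (and their
# leading zeros) intact, which is also why the IDs end up wrapped in
# quotes in the raw CSV.
df.to_csv("downloadable.csv", index=False, quoting=csv.QUOTE_NONNUMERIC)
```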
* Update based on PR feedback and lint needs
* Change field we're using in downloadable
This reverts the downloadable csv field list to use
MEDIAN_INCOME_AS_PERCENT_OF_STATE_FIELD instead of
MEDIAN_INCOME_AS_PERCENT_OF_AMI_FIELD in order to get the test to pass.
The point of this PR is a refactor (and a small change to the CSV
quotations), not to change the output. That will be a different PR
later.
Co-authored-by: Shelby Switzer <shelby.switzer@cms.hhs.gov>