* Rezip CSV and Excel files with Codebook
* codebook version
* packages fix
* pydantic
* lint
* Remove markdown link from markdown checker (#1936)
Co-authored-by: Vim <86254807+vim-usds@users.noreply.github.com>
* update be staging gha
* NRI dataset and initial score YAML configuration
* checkpoint
* adding data checks for release branch
* passing tests
* adding INPUT_EXTRACTED_FILE_NAME to base class
* lint
* columns to keep and tests
* update be staging gha
* checkpoint
* update be staging gha
* NRI dataset and initial score YAML configuration
* checkpoint
* adding data checks for release branch
* passing tests
* adding INPUT_EXTRACTED_FILE_NAME to base class
* lint
* columns to keep and tests
* checkpoint
* PR Review
* removing source url
* tests
* stop execution of ETL if there's a YAML schema issue (see the sketch after this commit block)
* update be staging gha
* adding source url as class var again
* clean up
* force cache bust
* gha cache bust
* dynamically set score vars from YAML
* docstrings
* removing last updated year - optional reverse percentile
* passing tests
* sort order
* column ordering
* PR review
* class level vars
* Updating DatasetsConfig
* fix pylint errors
* moving metadata hint back to code
Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov>
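Several commits above (pydantic, the score YAML configuration, `DatasetsConfig`, and stopping the ETL on a YAML schema issue) describe config-driven scoring. A minimal sketch of that idea, assuming hypothetical field names and file layout rather than the pipeline's real schema:

```python
import sys
from typing import Dict, List

import yaml
from pydantic import BaseModel, ValidationError


class DatasetConfig(BaseModel):
    # Field names here are illustrative, not the pipeline's actual schema
    long_name: str
    input_extracted_file_name: str
    load_fields: List[Dict]


class DatasetsConfig(BaseModel):
    datasets: List[DatasetConfig]


def load_datasets_config(path: str) -> DatasetsConfig:
    with open(path, encoding="utf-8") as config_file:
        raw_config = yaml.safe_load(config_file)
    try:
        return DatasetsConfig(**raw_config)
    except ValidationError as error:
        # Halt the run instead of continuing with a malformed configuration
        sys.exit(f"Invalid score YAML configuration: {error}")
```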
* starting tribal pr
* further pipeline work
* bia merge working
* alaska villages and tribal geo generate
* tribal folders
* adding data full run
* tile generation
* tribal tile deploy
* installation step
* trigger action
* installing to home dir
* dry-run
* pyenv
* py 2.8
* trying s4cmd
* removing pyenv
* poetry s4cmd
* num-threads
* public read
* poetry cache
* s4cmd all around
* poetry cache
* poetry cache
* install poetry packages
* poetry echo
* let's do this
* s4cmd install on run
* s4cmd
* add aws back
* add aws back
* testing census api key and poetry caching
* census api key
* census api
* census api key #3
* 250
* poetry update
* poetry change
* check census api key
* force flag
* update score gen and tilefy; remove cached fips
* small gdal update
* invalidation
* missing cache ids
Summary: In this PR, we create a new variable so that the % college students is expressed as % not college students. This means the front end can display % not college students.
Includes the old variables so that this will not break the front end.
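A minimal sketch of the inversion described above, with hypothetical column names rather than the pipeline's actual field names:

```python
# Derive the inverted share; the original column is kept so the front end is not broken
df["percent_not_enrolled_in_college"] = 1 - df["percent_enrolled_in_college"]
```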
Made some quick, mostly cosmetic updates to the quick launch changes. This mostly entailed converting strings to constants and cleaning up some code to make it neater.
Changes -- Puerto Rico AMI, updating ag loss, and dropping Puerto Rico from some threshold counts.
We wanted to implement a slightly different FEMA agricultural loss indicator. Here, we take the 90th percentile only of tracts that have agricultural value, and then we also floor the denominator of the rate calculation (loss / total value) at $408k.
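A minimal sketch of that calculation in pandas, assuming hypothetical column names; the $408k floor and the restriction to tracts with agricultural value follow the description above:

```python
AG_VALUE_FLOOR = 408_000  # floor for the denominator of the loss rate

has_ag_value = df["agricultural_value"] > 0

# Loss rate = expected loss / total agricultural value, with the denominator floored
df.loc[has_ag_value, "ag_loss_rate"] = (
    df.loc[has_ag_value, "expected_ag_loss"]
    / df.loc[has_ag_value, "agricultural_value"].clip(lower=AG_VALUE_FLOOR)
)

# Percentile (and hence the 90th-percentile threshold) computed only among
# tracts that actually have agricultural value
df.loc[has_ag_value, "ag_loss_rate_percentile"] = df.loc[
    has_ag_value, "ag_loss_rate"
].rank(pct=True)
df["exceeds_ag_loss_threshold"] = df["ag_loss_rate_percentile"] >= 0.90
```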
* WIP on parallelizing
* switching to get_tmp_path for nri
* switching to get_tmp_path everywhere necessary
* fixing linter errors
* moving heavy ETLs to front of line
* add hold
* moving cdc places up
* removing unnecessary print
* moving h&t up
* adding parallel to geo post
* better census labels
* switching to concurrent futures (see the sketch after this list)
* fixing output
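A minimal sketch of the concurrent.futures approach referenced above; `etl_instances` and the `run_etl` helper are placeholders, not the pipeline's actual API. Submitting the heavier ETLs first mirrors the "moving heavy ETLs to front of line" commit:

```python
import concurrent.futures


def run_etl(etl):
    # Placeholder for whatever extract/transform/load sequence a dataset needs
    etl.extract()
    etl.transform()
    etl.load()
    return type(etl).__name__


with concurrent.futures.ThreadPoolExecutor() as executor:
    # etl_instances is assumed to be ordered with the slowest ETLs first
    futures = {executor.submit(run_etl, etl): etl for etl in etl_instances}
    for future in concurrent.futures.as_completed(futures):
        print(f"Completed ETL: {future.result()}")
```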
* Re-export requirements.txt to fix version errors
The version of lxml in this file had a known vulnerability that got
caught by the "safety" checker, but it is updated in the poetry files.
Regenerated using:
https://github.com/usds/justice40-tool/tree/main/data/data-pipeline#miscellaneous
* Fix lint error
* Run lint on all envs and add comments
* Ignore tests that fail lint because of dev deps
* Ignore medium.com in link checker
It's returning 403s to GitHub Actions...
* draft wip
* initial commit
* clear output from notebook
* revert to 65ceb7900f
* draft wip
* initial commit
* clear output from notebook
* revert to 65ceb7900f
* make michigan prefix more readable
* standardize Michigan names and move all constants from class into field names module
* standardize Michigan names and move all constants from class into field names module
* include only pertinent columns for scoring comparison tool
* michigan EJSCREEN standardization
* final PR feedback
* added exposition and summary of Michigan EJSCREEN
* added exposition and summary of Michigan EJSCREEN
* fix typo
Co-authored-by: Saran Ahluwalia <ahlusar.ahluwalia@gmail.com>
* Update Side Panel Tile Data
* Update Side Panel Tile Data
* Correct indicator names to match csv
* Replace Score with Rate
* Comment out FEMA Loss Rate to troubleshoot
* Removes all "FEMA Loss Rate" array elements
* Revert FEMA to Score
* Remove expected loss rate
* Remove RMP and NPL from BASIC array
* Attempt to make shape mismatch align
- fix a README typo
* Add Score L indicators to TILE_SCORE_FLOAT_COLUMNS
* removing cbg references
* completes the ticket
* Update side panel fields
* Update index file writing to create parent dir
* Updates from linting
* fixing missing field_names for island territories 90th percentile fields
* Update downloadable fields and fix field name
* Update file fields and tests
* Update ordering of fields and leave TODO
* Update pickle after re-ordering of file
* fixing bugs in etl_score_geo
* Repeating index for diesel fix
* passing tests
* adding pytest.ini
Co-authored-by: Vim USDS <vimal.k.shah@omb.eop.gov>
Co-authored-by: Shelby Switzer <shelby.switzer@cms.hhs.gov>
Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov>
* Adds four fields:
  * Summer days above 90F
  * Percent low access to healthy food
  * Percent impenetrable surface areas
  * Low third grade reading proficiency
* Each of these four gets added into Definition L in various factors.
* Additionally, I add college attendance fields to the ETL for Census ACS.
* This PR also introduces the notion of "reverse percentiles", relevant to ticket #970.
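A minimal sketch of the "reverse percentile" idea, with a hypothetical column name: for an indicator where a lower raw value means a higher burden (such as third grade reading proficiency), ranking in descending order keeps the convention that a higher percentile means more burdened.

```python
# Lower proficiency -> higher "low reading proficiency" percentile
df["low_reading_proficiency_percentile"] = df["reading_proficiency"].rank(
    ascending=False, pct=True
)
```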
* added fieldnames
* todo pollution, water, health & workforce
* workforce
* work in progress
* add utility function to replace duplicate summation logic
* move fpl series into add columns - run black .
* added revisions - still a wip
* added fieldnames
* todo pollution, water, health & workforce
* workforce
* work in progress
* add utility function to replace duplicate summation logic
* move fpl series into add columns - run black .
* added revisions - still a wip
* revise workforce and water
* revise housing and add incremental counter for workforce
* last PR nit
* revise workforce
* more PR feedback in score l
* more PR feedback in score l
* more PR feedback in score l
* add FPL_SERIES and update references in score l
* fix bugs
* reparameterize function
* final revisions in fieldnames
* make computations all consistent so we assign with FPL_200_SERIES
* fieldnames refactor after clarification and PR review
* finalize
* finalize with no typos
* fix length
* added median income var
* swap thresholds
* remove iteration
* remove stray '
* address flake 8
* added f string formatting and fixed typos
* added f string formatting and fixed typos
* move up
* remove dupes
* reformat
* fix bugs
* fix bugs
* initialize
Co-authored-by: Saran Ahluwalia <sarahluw@cisco.com>
* replace temporary fieldnames that are not found and indexed
* fixed field names
* PR review
* PR review - revert
Co-authored-by: Saran Ahluwalia <sarahluw@cisco.com>
This ended up being a pretty large task. Here's what this PR does:
1. Pulls in Vincent's data from island areas into the score ETL. This is from the 2010 decennial census, the last census of any kind in the island areas.
2. Grabs a few new fields from 2010 island areas decennial census.
3. Calculates area median income for island areas.
4. Stops using EJSCREEN as the source of our high school education data and directly pulls that from census (this was related to this project so I went ahead and fixed it).
5. Grabs a bunch of data from the 2010 ACS in the states/Puerto Rico/DC, so that we can create percentiles comparing apples-to-apples (ish) from 2010 island areas decennial census data to 2010 ACS data. This required creating a new class because all the ACS fields are different between 2010 and 2019, so it wasn't as simple as looping over a year parameter.
6. Creates a combined population field of island areas and mainland so we can use those stats in our comparison tool, and updates the comparison tool accordingly.
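A minimal sketch of points 5 and 6, assuming hypothetical column names: concatenate the 2010 island-areas decennial data with the 2010 ACS data, build a combined population field, and compute percentiles over the combined frame so the comparison stays (roughly) apples to apples.

```python
import pandas as pd

# mainland_2010_acs: tract-level 2010 ACS data for states, DC, and Puerto Rico
# island_2010_decennial: tract-level 2010 decennial census data for island areas
combined = pd.concat([mainland_2010_acs, island_2010_decennial], ignore_index=True)

# Combined population field so the comparison tool can report totals across both
combined["total_population_2010"] = combined["acs_population_2010"].fillna(
    combined["decennial_population_2010"]
)

# Percentiles computed over the combined frame, so island-area tracts are ranked
# against 2010 data rather than 2019 ACS data
combined["median_income_percentile_2010"] = combined["median_income_2010"].rank(pct=True)
```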
* per tract collect all disaster total annual expected loss - numerator
* add updated numerators
* EALP columns are missing on tox check - this will ensure only EALP columns that exist are subset on (see the sketch after this commit block)
* EALB columns are missing on tox check - this will ensure only EALP columns that exist are subset on
* reverted to incorporate megatracts
* updated unit tests
* fix tests
* add transform
* remove print statement
* input reflects input from FEMA risks for tracts
* revise tests and update fixtures - clean up tests and main transform function
* added more records
* remove references to Blocks in keyword args in tests
* linting
* addressed latest PR feedback
* remove imports and update arguments to be compatible for 1.1.0
* remove block reference in test
* change precision to 10 digits - refactor tests to accommodate this
Co-authored-by: Saran Ahluwalia <sarahluw@cisco.com>
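A minimal sketch of the defensive subsetting mentioned above (column names are illustrative): only the expected-loss columns that are actually present get summed, so a trimmed test fixture under tox does not raise a KeyError.

```python
expected_loss_columns = [
    "EAL_FLOOD",
    "EAL_WILDFIRE",
    "EAL_HURRICANE",
]  # illustrative per-hazard expected annual loss columns

# Keep only the columns that exist in this particular extract
existing_columns = [col for col in expected_loss_columns if col in df.columns]

# Per-tract total annual expected loss (the numerator of the loss rate)
df["total_expected_annual_loss"] = df[existing_columns].sum(axis=1)
```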
* Update Census AMI to ETL into tracts, not CBGs
Co-authored-by: Shelby Switzer <shelby.switzer@cms.hhs.gov>
Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov>
* Use tract instead of block group when calling census API (sketched after this block)
* fixing merge conflicts
Co-authored-by: Shelby Switzer <shelby.switzer@cms.hhs.gov>
Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov>
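A minimal sketch of that geography change in the census API call (the year, variable code, state FIPS, and `CENSUS_API_KEY` constant are illustrative): request `tract:*` rather than `block group:*`.

```python
import requests

CENSUS_API_KEY = "..."  # hypothetical constant holding the API key

response = requests.get(
    "https://api.census.gov/data/2019/acs/acs5",
    params={
        "get": "NAME,B19013_001E",  # median household income, as an example variable
        "for": "tract:*",           # previously the call asked for "block group:*"
        "in": "state:01",
        "key": CENSUS_API_KEY,
    },
)
rows = response.json()
```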
Update data download URL to use tract as focus, use tract field name,
and move this dataset to the tracts df list in etl_score.
Co-authored-by: Shelby Switzer <shelby.switzer@cms.hhs.gov>