# Justice40 Data Pipeline and Scoring Application
## Table of Contents
- [About](#about)
- [Accessing Data](#accessing-data)
- [Installing the Data Pipeline and Scoring Application](#installing-the-data-pipeline-and-scoring-application)
- [Running the Data Pipeline and Scoring Application](#running-the-data-pipeline-and-scoring-application)
- [How Scoring Works](#how-scoring-works)
- [Comparing Scores](#comparing-scores)
- [Testing](#testing)
## About
The Justice40 Data Pipeline and Scoring Application is used to retrieve input data sources, perform Extract-Transform-Load (ETL) operations on those data sources, and ultimately generate the scores and supporting data (e.g. map tiles) consumed by the [Climate and Economic Justice Screening Tool (CEJST) website](https://screeningtool.geoplatform.gov/). This data can also be used to compare experimental versions of the Justice40 score to established environmental justice indices, such as EJSCREEN and CalEnviroScreen.
> :exclamation: **ATTENTION**
> The Council on Environmental Quality (CEQ) [made version 1.0 of the CEJST available in November 2022](https://www.whitehouse.gov/ceq/news-updates/2022/11/22/biden-harris-administration-launches-version-1-0-of-climate-and-economic-justice-screening-tool-key-step-in-implementing-president-bidens-justice40-initiative/). Future versions are in continuous development, and scores are likely to change over time. Only versions made publicly available via the CEJST by CEQ may be used for the Justice40 Initiative.
We believe that the entire data pipeline should be open and replicable end-to-end. As part of this, in addition to all code being open, we also strive to make data visible and available for use at every stage of our pipeline. You can follow the installation instructions below to spin up the data pipeline yourself in your own environment; you can also access the data we've already processed.
## Accessing Data
If you wish to access our data without running the Justice40 Data Pipeline and Scoring Application locally, you can do so using the following links.
| Dataset | Location |
| ------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------- |
| Source Data | You can find the source URLs in the `etl.py` files located within each directory in `data/data-pipeline/etl/sources` |
| Version 1.0 Combined Datasets (from all Sources) | [Download](https://static-data-screeningtool.geoplatform.gov/data-versions/1.0/data/score/csv/full/usa.csv) |
| Shape Files for Mapping Applications | [Download](https://static-data-screeningtool.geoplatform.gov/data-versions/1.0/data/score/downloadable/1.0-shapefile-codebook.zip) |
| Documentation and Other Downloads | [Climate and Economic Justice Screening Tool Downloads](https://screeningtool.geoplatform.gov/en/downloads) |
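For example, here is a minimal sketch of fetching the version 1.0 datasets from the command line, assuming `curl` is available (the output filenames are illustrative):

```bash
# Download the version 1.0 combined score dataset (CSV).
curl -L -o usa.csv \
  "https://static-data-screeningtool.geoplatform.gov/data-versions/1.0/data/score/csv/full/usa.csv"

# Download the shapefile and codebook bundle for mapping applications.
curl -L -o 1.0-shapefile-codebook.zip \
  "https://static-data-screeningtool.geoplatform.gov/data-versions/1.0/data/score/downloadable/1.0-shapefile-codebook.zip"
```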
## Installing the Data Pipeline and Scoring Application
If you wish to run the Justice40 Data Pipeline and Scoring Application in your own environment, you have the option of using Docker or setting up a local environment. Docker allows you to install and run the application inside a container without setting up a local environment, and is the quickest and easiest option. A local environment requires you to set up your system manually, but provides the ability to make changes and run individual parts of the application without the need for Docker.
With either choice, you'll first need to perform some installation steps.
### Installing Docker
To install Docker, follow these [instructions](https://docs.docker.com/get-docker/). After installation is complete, visit [Running with Docker](#running-with-docker) for more information.
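As an optional sanity check, you can confirm that both Docker and Docker Compose are installed and on your `PATH`:

```bash
docker --version
docker-compose --version
```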
---
### Installing Your Local Environment
The detailed steps for performing [local environment installation can be found in our guide](INSTALLATION.md). After installation is complete, visit [Running the Application Locally](#running-in-your-local-environment) for more information.
## Running the Data Pipeline and Scoring Application
The Justice40 Data Pipeline and Scoring Application is a multi-step process that:
1. Retrieves input data sources (extract), standardizes those input data sources' data into an intermediate format (transform), and saves the results to the file system (load). It performs those steps for each configured input data source (found at [`data_pipeline/etl/sources`](data_pipeline/etl/sources))
2. Calculates a score
3. Combines the score with geographic data
4. Generates map tiles for use in the client website
```mermaid
graph LR
A[Run ETL on all External\nData Sources] --> B[Calculate Score]
B --> C[Combine Score with\nGeographic Data]
C --> D[Generate Map Tiles]
```
You can perform these steps either using Docker or by running the application in your local environment.
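For reference, each stage in the diagram above corresponds to a command of the application module. A minimal sketch, assuming a completed local environment installation (the Docker-wrapped equivalents appear in the next section):

```bash
python3 -m data_pipeline.application etl-run             # 1. run ETL on all external data sources
python3 -m data_pipeline.application score-run           # 2. calculate the score
python3 -m data_pipeline.application geo-score           # 3. combine the score with geographic data
python3 -m data_pipeline.application generate-map-tiles  # 4. generate map tiles
```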
### Running with Docker
Docker can be used to run the application inside a container without setting up a local environment.
> :exclamation: **ATTENTION**
> You must increase the memory resource of your container to at least 8096 MB to run this application in Docker
Before running with Docker, you must build the Docker container. Make sure you're in the root directory of the repository (`/justice40-tool`) and run `docker-compose build --no-cache`.
Once you've built the Docker container, run `docker-compose up`. Docker will spin up three containers: the client container, the static server container, and the data container. Once all data is generated, you can view the application by navigating to [http://localhost:8000](http://localhost:8000) in your browser.
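Putting those two steps together, the full build-and-run sequence looks like this (assuming the repository was cloned as `justice40-tool`):

```bash
cd justice40-tool                 # root directory of the repository
docker-compose build --no-cache   # build the containers
docker-compose up                 # start the client, static server, and data containers
# When data generation completes, browse to http://localhost:8000
```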
<details>
<summary>View additional commands</summary>
If you want to run specific data tasks, you can open a terminal window, navigate to the root folder for this repository, and execute any command for the application using this format:
`docker run --rm -it -v ${PWD}/data/data-pipeline/data_pipeline/data:/data_pipeline/data j40_data_pipeline python3 -m data_pipeline.application [command]`
- Get help: `docker run --rm -it -v ${PWD}/data/data-pipeline/data_pipeline/data:/data_pipeline/data j40_data_pipeline python3 -m data_pipeline.application --help`
- Generate census data: `docker run --rm -it -v ${PWD}/data/data-pipeline/data_pipeline/data:/data_pipeline/data j40_data_pipeline python3 -m data_pipeline.application census-data-download`
- Run all ETL and Generate score: `docker run --rm -it -v ${PWD}/data/data-pipeline/data_pipeline/data:/data_pipeline/data j40_data_pipeline python3 -m data_pipeline.application score-full-run`
- Clean up the data directories: `docker run --rm -it -v ${PWD}/data/data-pipeline/data_pipeline/data:/data_pipeline/data j40_data_pipeline python3 -m data_pipeline.application data-cleanup`
- Run all ETL processes: `docker run --rm -it -v ${PWD}/data/data-pipeline/data_pipeline/data:/data_pipeline/data j40_data_pipeline python3 -m data_pipeline.application etl-run`
- Generate Score: `docker run --rm -it -v ${PWD}/data/data-pipeline/data_pipeline/data:/data_pipeline/data j40_data_pipeline python3 -m data_pipeline.application score-run`
- Combine Score with Geojson and generate high and low zoom map tile sets: `docker run --rm -it -v ${PWD}/data/data-pipeline/data_pipeline/data:/data_pipeline/data j40_data_pipeline python3 -m data_pipeline.application geo-score`
- Generate Map Tiles: `docker run --rm -it -v ${PWD}/data/data-pipeline/data_pipeline/data:/data_pipeline/data j40_data_pipeline python3 -m data_pipeline.application generate-map-tiles`
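As a sketch, a typical end-to-end run chains the commands above in pipeline order. The long `docker run` prefix is identical for each, so here it is factored into a hypothetical shell function for readability:

```bash
# Hypothetical helper wrapping the common `docker run` prefix used above.
j40() {
  docker run --rm -it \
    -v "${PWD}/data/data-pipeline/data_pipeline/data:/data_pipeline/data" \
    j40_data_pipeline python3 -m data_pipeline.application "$@"
}

j40 census-data-download   # retrieve census data
j40 etl-run                # run all ETL processes
j40 score-run              # calculate the score
j40 geo-score              # combine score with GeoJSON
j40 generate-map-tiles     # generate high- and low-zoom map tile sets
```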
To learn more about these commands and when they should be run, refer to [Running for Local Development](#running-for-local-development).
</details>
* Update pickle I missed (#2033) * Clean commit of just aggregate burden notebook (#1819) added a burden notebook * Update the dockerfile (#2045) * Update so the image builds (#2026) * Fix bad dict (2026) * Rename census tract field in downloads (#2068) * Change tract ID field name (2060) * Update lockfile (#2061) * Bump safety, jupyter, wheel (#2061) * DOn't depend directly on wheel (2061) * Bring narwhal reqs in line with main * Update tribal area counts (#2071) * Rename tribal area field (2062) * Add missing file (#2062) * Add checks to create version (#2047) (#2052) * Fix failing safety (#2114) * Ignore vuln that doesn't affect us 2113 https://nvd.nist.gov/vuln/detail/CVE-2022-42969 landed recently and there's no fix in py (which is maintenance mode). From my analysis, that CVE cannot hurt us (famous last words), so we'll ignore the vuln for now. * 2113 Update our gdal ppa * that didn't work (2113) * Don't add the PPA, the package exists (#2113) * Fix type (#2113) * Force an update of wheel 2113 * Also remove PPA line from create-score-versions * Drop 3.8 because of wheel 2113 * Put back 3.8, use newer actions * Try another way of upgrading wheel 2113 * Upgrade wheel in tox too 2113 * Typo fix 2113 Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: Emma Nechamkin <97977170+emma-nechamkin@users.noreply.github.com> Co-authored-by: Shelby Switzer <shelby.c.switzer@omb.eop.gov> Co-authored-by: Shelby Switzer <shelbyswitzer@gmail.com> Co-authored-by: Emma Nechamkin <Emma.J.Nechamkin@omb.eop.gov> Co-authored-by: Matt Bowen <83967628+mattbowen-usds@users.noreply.github.com> Co-authored-by: Jorge Escobar <83969469+esfoobar-usds@users.noreply.github.com> Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov> Co-authored-by: Jorge Escobar <jorge.e.escobar@omb.eop.gov> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov> Co-authored-by: matt bowen <matt@mattbowen.net>
2022-12-01 18:50:54 -08:00
### Running in Your Local Environment
When running in your local environment, each step of the application can be run individually or as a group.
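For example, individual steps and the full pipeline can both be invoked through the pipeline's command-line entry point. The sketch below assumes the entry point is `data_pipeline/application.py` and that it exposes `etl-run`, `score-run`, and `score-full-run` commands with an optional `--dataset` flag; the exact command and dataset names may differ in your version, so check `poetry run python3 data_pipeline/application.py --help` for what is available.

```sh
# Activate the Poetry-managed virtual environment first.
poetry shell

# Run a single step: extract and transform one data source
# (the --dataset flag and the dataset name are illustrative).
python3 data_pipeline/application.py etl-run --dataset census_acs

# Run steps individually, in order: ETL all sources, then score.
python3 data_pipeline/application.py etl-run
python3 data_pipeline/application.py score-run

# Or run the ETL and scoring steps together as a group.
python3 data_pipeline/application.py score-full-run
```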
* Update pickle I missed (#2033) * Clean commit of just aggregate burden notebook (#1819) added a burden notebook * Update the dockerfile (#2045) * Update so the image builds (#2026) * Fix bad dict (2026) * Rename census tract field in downloads (#2068) * Change tract ID field name (2060) * Update lockfile (#2061) * Bump safety, jupyter, wheel (#2061) * DOn't depend directly on wheel (2061) * Bring narwhal reqs in line with main * Update tribal area counts (#2071) * Rename tribal area field (2062) * Add missing file (#2062) * Add checks to create version (#2047) (#2052) * Fix failing safety (#2114) * Ignore vuln that doesn't affect us 2113 https://nvd.nist.gov/vuln/detail/CVE-2022-42969 landed recently and there's no fix in py (which is maintenance mode). From my analysis, that CVE cannot hurt us (famous last words), so we'll ignore the vuln for now. * 2113 Update our gdal ppa * that didn't work (2113) * Don't add the PPA, the package exists (#2113) * Fix type (#2113) * Force an update of wheel 2113 * Also remove PPA line from create-score-versions * Drop 3.8 because of wheel 2113 * Put back 3.8, use newer actions * Try another way of upgrading wheel 2113 * Upgrade wheel in tox too 2113 * Typo fix 2113 Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: Emma Nechamkin <97977170+emma-nechamkin@users.noreply.github.com> Co-authored-by: Shelby Switzer <shelby.c.switzer@omb.eop.gov> Co-authored-by: Shelby Switzer <shelbyswitzer@gmail.com> Co-authored-by: Emma Nechamkin <Emma.J.Nechamkin@omb.eop.gov> Co-authored-by: Matt Bowen <83967628+mattbowen-usds@users.noreply.github.com> Co-authored-by: Jorge Escobar <83969469+esfoobar-usds@users.noreply.github.com> Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov> Co-authored-by: Jorge Escobar <jorge.e.escobar@omb.eop.gov> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov> Co-authored-by: matt bowen <matt@mattbowen.net>
2022-12-01 18:50:54 -08:00
> :bulb: **NOTE**
> This section describes only the steps necessary to run the Justice40 Data Pipeline and Scoring Application. If you'd like to run the client application, see the [client README](/client/README.md). Note that, by default, the client application does not use data generated locally by this application.
Start by familiarizing yourself with the available commands. To do this, navigate to `justice40-tool/data/data-pipeline` and run `poetry run python3 data_pipeline/application.py --help`. You'll see a list of commands and what those commands do. You can also request help on any individual command to get more information about command options (e.g. `poetry run python3 data_pipeline/application.py etl-run --help`).
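For example, a minimal first session might look like the following sketch (the commands are those mentioned above; the exact set of available subcommands depends on your checkout, so treat the `--help` output as the source of truth):

```bash
# Change into the data pipeline directory
cd justice40-tool/data/data-pipeline

# List the available commands and what they do
poetry run python3 data_pipeline/application.py --help

# Get more information about an individual command's options (e.g. etl-run)
poetry run python3 data_pipeline/application.py etl-run --help
```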
* Update pickle I missed (#2033) * Clean commit of just aggregate burden notebook (#1819) added a burden notebook * Update the dockerfile (#2045) * Update so the image builds (#2026) * Fix bad dict (2026) * Rename census tract field in downloads (#2068) * Change tract ID field name (2060) * Update lockfile (#2061) * Bump safety, jupyter, wheel (#2061) * DOn't depend directly on wheel (2061) * Bring narwhal reqs in line with main * Update tribal area counts (#2071) * Rename tribal area field (2062) * Add missing file (#2062) * Add checks to create version (#2047) (#2052) * Fix failing safety (#2114) * Ignore vuln that doesn't affect us 2113 https://nvd.nist.gov/vuln/detail/CVE-2022-42969 landed recently and there's no fix in py (which is maintenance mode). From my analysis, that CVE cannot hurt us (famous last words), so we'll ignore the vuln for now. * 2113 Update our gdal ppa * that didn't work (2113) * Don't add the PPA, the package exists (#2113) * Fix type (#2113) * Force an update of wheel 2113 * Also remove PPA line from create-score-versions * Drop 3.8 because of wheel 2113 * Put back 3.8, use newer actions * Try another way of upgrading wheel 2113 * Upgrade wheel in tox too 2113 * Typo fix 2113 Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: Emma Nechamkin <97977170+emma-nechamkin@users.noreply.github.com> Co-authored-by: Shelby Switzer <shelby.c.switzer@omb.eop.gov> Co-authored-by: Shelby Switzer <shelbyswitzer@gmail.com> Co-authored-by: Emma Nechamkin <Emma.J.Nechamkin@omb.eop.gov> Co-authored-by: Matt Bowen <83967628+mattbowen-usds@users.noreply.github.com> Co-authored-by: Jorge Escobar <83969469+esfoobar-usds@users.noreply.github.com> Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov> Co-authored-by: Jorge Escobar <jorge.e.escobar@omb.eop.gov> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov> Co-authored-by: matt bowen <matt@mattbowen.net>
2022-12-01 18:50:54 -08:00
> :exclamation: **ATTENTION**
> Some commands fetch large amounts of data from remote data sources or run resource-intensive calculations, and may take a long time to complete (e.g. `generate-map-tiles` can take over 30 minutes). Commands that fetch data from remote sources (e.g. `etl-run`) should not be run too often; if they are, you may get throttled or eventually blocked by the sites serving the data.
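
Both commands mentioned above are subcommands of the pipeline's command-line application. As a sketch (assuming the `data_pipeline/application.py` entry point this repository uses; run it with `--help` for the authoritative list of subcommands):

```sh
# Fetch all source data and run each ETL step (network-heavy; avoid frequent re-runs)
poetry run python3 data_pipeline/application.py etl-run

# Build the map tiles (resource-intensive; can take over 30 minutes)
poetry run python3 data_pipeline/application.py generate-map-tiles
```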
#### Download Census Data
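
Census tract data should be fetched before the other ETL steps, since later stages join their sources onto Census tract GEOIDs. A minimal sketch, assuming the `census-data-download` subcommand exposed by `data_pipeline/application.py`:

```sh
# Download and extract the Census tract data consumed by the rest of the pipeline
poetry run python3 data_pipeline/application.py census-data-download
```

This command downloads a large amount of data; per the note above, avoid re-running it unless you need to refresh your local census files.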
* Update pickle I missed (#2033) * Clean commit of just aggregate burden notebook (#1819) added a burden notebook * Update the dockerfile (#2045) * Update so the image builds (#2026) * Fix bad dict (2026) * Rename census tract field in downloads (#2068) * Change tract ID field name (2060) * Update lockfile (#2061) * Bump safety, jupyter, wheel (#2061) * DOn't depend directly on wheel (2061) * Bring narwhal reqs in line with main * Update tribal area counts (#2071) * Rename tribal area field (2062) * Add missing file (#2062) * Add checks to create version (#2047) (#2052) * Fix failing safety (#2114) * Ignore vuln that doesn't affect us 2113 https://nvd.nist.gov/vuln/detail/CVE-2022-42969 landed recently and there's no fix in py (which is maintenance mode). From my analysis, that CVE cannot hurt us (famous last words), so we'll ignore the vuln for now. * 2113 Update our gdal ppa * that didn't work (2113) * Don't add the PPA, the package exists (#2113) * Fix type (#2113) * Force an update of wheel 2113 * Also remove PPA line from create-score-versions * Drop 3.8 because of wheel 2113 * Put back 3.8, use newer actions * Try another way of upgrading wheel 2113 * Upgrade wheel in tox too 2113 * Typo fix 2113 Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: Emma Nechamkin <97977170+emma-nechamkin@users.noreply.github.com> Co-authored-by: Shelby Switzer <shelby.c.switzer@omb.eop.gov> Co-authored-by: Shelby Switzer <shelbyswitzer@gmail.com> Co-authored-by: Emma Nechamkin <Emma.J.Nechamkin@omb.eop.gov> Co-authored-by: Matt Bowen <83967628+mattbowen-usds@users.noreply.github.com> Co-authored-by: Jorge Escobar <83969469+esfoobar-usds@users.noreply.github.com> Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov> Co-authored-by: Jorge Escobar <jorge.e.escobar@omb.eop.gov> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov> Co-authored-by: matt bowen <matt@mattbowen.net>
2022-12-01 18:50:54 -08:00
Begin the process of running the application in your local environment by downloading census data.
> :bulb: **NOTE**
> You'll only need to do this once (unless you clean your census data folder)! Subsequent runs will use the data you've already downloaded.
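
As a minimal sketch, assuming you've installed the application with Poetry per the installation instructions, the download step looks like the following (the `census-data-download` command name reflects the pipeline's click-based CLI; run `poetry run python3 data_pipeline/application.py --help` to confirm the available commands in your version):

```sh
# Run from the data/data-pipeline directory.
# Downloads census data and stores it locally; later pipeline
# steps (ETL, scoring, tile generation) read from this folder.
poetry run python3 data_pipeline/application.py census-data-download
```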
* Update pickle I missed (#2033) * Clean commit of just aggregate burden notebook (#1819) added a burden notebook * Update the dockerfile (#2045) * Update so the image builds (#2026) * Fix bad dict (2026) * Rename census tract field in downloads (#2068) * Change tract ID field name (2060) * Update lockfile (#2061) * Bump safety, jupyter, wheel (#2061) * DOn't depend directly on wheel (2061) * Bring narwhal reqs in line with main * Update tribal area counts (#2071) * Rename tribal area field (2062) * Add missing file (#2062) * Add checks to create version (#2047) (#2052) * Fix failing safety (#2114) * Ignore vuln that doesn't affect us 2113 https://nvd.nist.gov/vuln/detail/CVE-2022-42969 landed recently and there's no fix in py (which is maintenance mode). From my analysis, that CVE cannot hurt us (famous last words), so we'll ignore the vuln for now. * 2113 Update our gdal ppa * that didn't work (2113) * Don't add the PPA, the package exists (#2113) * Fix type (#2113) * Force an update of wheel 2113 * Also remove PPA line from create-score-versions * Drop 3.8 because of wheel 2113 * Put back 3.8, use newer actions * Try another way of upgrading wheel 2113 * Upgrade wheel in tox too 2113 * Typo fix 2113 Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: Emma Nechamkin <97977170+emma-nechamkin@users.noreply.github.com> Co-authored-by: Shelby Switzer <shelby.c.switzer@omb.eop.gov> Co-authored-by: Shelby Switzer <shelbyswitzer@gmail.com> Co-authored-by: Emma Nechamkin <Emma.J.Nechamkin@omb.eop.gov> Co-authored-by: Matt Bowen <83967628+mattbowen-usds@users.noreply.github.com> Co-authored-by: Jorge Escobar <83969469+esfoobar-usds@users.noreply.github.com> Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov> Co-authored-by: Jorge Escobar <jorge.e.escobar@omb.eop.gov> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov> Co-authored-by: matt bowen <matt@mattbowen.net>
2022-12-01 18:50:54 -08:00
To download census data, run the command `poetry run python3 data_pipeline/application.py census-data-download`.
If you have a high-speed internet connection and don't want to generate the census data locally, you can download [a zip version of the Census file](https://justice40-data.s3.amazonaws.com/data-sources/census.zip). Unzip it and move the contents into the `data/data-pipeline/data_pipeline/data/census` folder.
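Both routes are sketched below. The layout inside the zip is an assumption here, so adjust the final `mv` if the archive unpacks differently:

```sh
# Option 1: generate the census data locally (from justice40-tool/data/data-pipeline)
poetry run python3 data_pipeline/application.py census-data-download

# Option 2: from the repository root, download the pre-generated zip and place it manually
curl -L -O https://justice40-data.s3.amazonaws.com/data-sources/census.zip
unzip census.zip -d census-tmp
mv census-tmp/* data/data-pipeline/data_pipeline/data/census/
```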
#### Run the Application
Running the application in your local environment allows the most flexibility. You can pick and choose which commands you run, and test parts of the application individually or as a group. While we can't anticipate all of your individual development scenarios, we can give you the steps you'll need to run the application from start to finish.
Once you've downloaded the census data, run the following commands, in order, to exercise the entire Data Pipeline and Scoring Application. Each command is run from `justice40-tool/data/data-pipeline` in the form `poetry run python3 data_pipeline/application.py insert-name-of-command-here`; a consolidated example follows the table.
| Step | Command | Description | Example Output |
| ---- | --------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------- |
| 1    | `etl-run`             | Performs the ETL steps on all external data sources and saves the resulting intermediate files                                                                         | `data/dataset`                                             |
| 2 | `score-run` | Generates and stores the score | `data/score/csv/full/usa.csv` |
| 3 | `generate-score-post` | Performs a host of post-score activities, including adding county and state data to the score, shortening column names, and generating a downloadable package of data | `data/score/csv/tiles/usa.csv`, `data/score/downloadable` |
| 4    | `geo-score`           | Merges GeoJSON data with score data, creating both high- and low-resolution results                                                                                     | `data/score/geojson[usa high or low]`                      |
| 5 | `generate-map-tiles` | Generates map tiles for use in client website | `data/score/tiles/ high or low / {zoomLevel}` |
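Putting the table together, a full start-to-finish run looks roughly like this (step numbers match the table above):

```sh
cd justice40-tool/data/data-pipeline

poetry run python3 data_pipeline/application.py etl-run              # 1. ETL all external data sources
poetry run python3 data_pipeline/application.py score-run            # 2. generate and store the score
poetry run python3 data_pipeline/application.py generate-score-post  # 3. post-score processing and downloadable package
poetry run python3 data_pipeline/application.py geo-score            # 4. merge GeoJSON with score data
poetry run python3 data_pipeline/application.py generate-map-tiles   # 5. generate map tiles
```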
Many commands have options. For example, you can run a single dataset with `etl-run` by passing the command-line parameter `-d name-of-dataset-to-run`. Please use the `--help` option to find out more.
> :bulb: **NOTE**
> One important command-line option enables cached data sources. Pass `-u` to many commands (e.g. `etl-run`) to use locally cached data sources within the ETL portion of the pipeline. This ensures that you don't re-download many gigabytes of data with each run of the data pipeline.
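For example, combining the options above (the dataset name stays a placeholder rather than a real identifier):

```sh
# List a command's options
poetry run python3 data_pipeline/application.py etl-run --help

# ETL a single dataset, reusing locally cached source data
poetry run python3 data_pipeline/application.py etl-run -d name-of-dataset-to-run -u
```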
## How Scoring Works
Scores are generated by running the `score-run` command via Poetry or Docker. This command executes [`data_pipeline/etl/score/etl_score.py`](data_pipeline/etl/score/etl_score.py). During execution:
1. Source files from the [`data_pipeline/data/dataset`](data_pipeline/data/dataset) directory are loaded into memory (these source files were generated by the `etl-run` command)
2. These datasets are merged into a single dataframe using their Census Block Group GEOID as a common key, and the data in each of the columns is standardized in two ways:
- Their [percentile rank](https://en.wikipedia.org/wiki/Percentile_rank) is calculated, which tells us what percentage of other Census Block Groups have a lower value for that particular column.
   - They are normalized using [min-max normalization](https://en.wikipedia.org/wiki/Feature_scaling), which rescales each column so that the Census Block Group with the highest value is set to 1, the Census Block Group with the lowest value is set to 0, and every other value falls within that range according to how close it is to the highest or lowest value (see the formulas after this list).
3. The standardized columns are then used to calculate each of the Justice40 scores, and the results are exported to `data_pipeline/data/score/csv/full/usa.csv`. Different versions of the scoring algorithm, including the current version, can be found in [`data_pipeline/score`](data_pipeline/score).
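As a rough sketch of the two standardizations in step 2, ignoring ties and the exact scaling conventions used in the pipeline code, for a column with values $x_1, \dots, x_N$:

$$
\text{percentile rank}(x_i) = 100 \cdot \frac{\#\{\, j : x_j < x_i \,\}}{N},
\qquad
\text{min-max}(x_i) = \frac{x_i - \min_j x_j}{\max_j x_j - \min_j x_j}
$$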
## Comparing Scores
Scores can be compared to both internally calculated scores and scores calculated by other existing indices.
### Internal Comparison
Locally calculated scores can easily be compared with the score in production using the [Score Comparator](data_pipeline/comparator.py). The Score Comparator compares the number and names of the columns, the number of census tracts (rows), and the score values (when the columns and census tracts line up).
The Score Comparator runs on every GitHub pull request, but it can also be run manually with `poetry run python3 data_pipeline/comparator.py compare-score` from the `justice40-tool/data/data-pipeline` directory.
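For example:

```sh
cd justice40-tool/data/data-pipeline
poetry run python3 data_pipeline/comparator.py compare-score
```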
### External Comparison
We are building a comparison tool to enable easy (or at least straightforward) comparison of the Justice40 score with other existing indices. The goal is that, as we experiment and iterate on a scoring methodology, we can understand how our score overlaps with or differs from other indices that communities, nonprofits, and governments use to inform decision making.
Right now, our comparison tool is simply a Python notebook: `data/data-pipeline/data_pipeline/ipython/scoring_comparison.ipynb`.
To run this comparison tool (a consolidated command sketch follows these steps):
1. Make sure you've gone through the above steps to run the data ETL and score generation.
1. From the package directory (`data/data-pipeline/data_pipeline/`), navigate to the `ipython` directory.
1. Ensure you have `pandoc` installed on your computer. If you're on a Mac, run `brew install pandoc`; for other OSes, see pandoc's [installation guide](https://pandoc.org/installing.html).
1. Start the notebook server: `jupyter notebook`
1. In your browser, navigate to one of the URLs returned by the above command.
1. Select `scoring_comparison.ipynb` from the options in your browser.
1. Run through the steps in the notebook. You can step through them one at a time by clicking the "Run" button for each cell, or open the "Cell" menu and click "Run all" to run them all at once.
1. Reports and spreadsheets generated by the comparison tool will be available in `data/data-pipeline/data_pipeline/data/comparison_outputs`.
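A consolidated sketch of steps 2-4, assuming macOS with Homebrew and that Jupyter runs through the pipeline's Poetry environment:

```sh
cd data/data-pipeline/data_pipeline/ipython  # step 2: the notebook lives here
brew install pandoc                          # step 3: required for report generation (macOS; see pandoc's guide for other OSes)
poetry run jupyter notebook                  # step 4: `poetry run` assumes Jupyter is in the pipeline's Poetry env
```

Then open one of the printed URLs, select `scoring_comparison.ipynb`, and run the cells.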
> :exclamation: **ATTENTION**
> This may take over an hour to fully execute and generate the reports.
## Testing
### Background
* Readd CDC dataset config (#1900) * adding comments to fips code * delete unnecessary loggers Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov> * Improve score test documentation based on Lucas's feedback (#1835) (#1914) * Better document base on Lucas's feedback (#1835) * Fix typo (#1835) * Add test to verify GEOJSON matches tiles (#1835) * Remove NOOP line (#1835) * Move GEOJSON generation up for new smoketest (#1835) * Fixup code format (#1835) * Update readme for new somketest (#1835) * Cleanup source tests (#1912) * Move test to base for broader coverage (#1848) * Remove duplicate line (#1848) * FUDS needed an extra mock (#1848) * Add tribal count notebook (#1917) (#1919) * Add tribal count notebook (#1917) * test without caching * added comment Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov> * Add tribal overlap to downloads (#1907) * Add tribal data to downloads (#1904) * Update test pickle with current cols (#1904) * Remove text of tribe names from GeoJSON (#1904) * Update test data (#1904) * Add tribal overlap to smoketests (#1904) * Issue 1910: Do not impute income for 0 population tracts (#1918) * should be working, has unnecessary loggers * removing loggers and cleaning up * updating ejscreen tests * adding tests and responding to PR feedback * fixing broken smoke test * delete smoketest docs * updating click * updating click * Bump just jupyterlab (#1930) * Fixing link checker (#1929) * Update deps safety says are vulnerable (#1937) (#1938) Co-authored-by: matt bowen <matt@mattbowen.net> * Add demos for island areas (#1932) * Backfill population in island areas (#1882) * Update smoketest to account for backfills (#1882) As I wrote in the commend: We backfill island areas with data from the 2010 census, so if THOSE tracts have data beyond the data source, that's to be expected and is fine to pass. If some other state or territory does though, this should fail This ends up being a nice way of documenting that behavior i guess! 
* Fixup lint issues (#1882) * Add in race demos to 2010 census pull (#1851) * Add backfill data to score (#1851) * Change column name (#1851) * Fill demos after the score (#1851) * Add income back, adjust test (#1882) * Apply code-review feedback (#1851) * Add test for island area backfill (#1851) * Fix bad rename (#1851) * Reorder download fields, add plumbing back (#1942) * Add back lack of plumbing fields (#1920) * Reorder fields for excel (#1921) * Reorder excel fields (#1921) * Fix formating, lint errors, pickes (#1921) * Add missing plumbing col, fix order again (#1921) * Update that pickle (#1921) * refactoring tribal (#1960) * updated with scoring comparison * updated for narhwal -- leaving commented code in for now * pydantic upgrade * produce a string for the front end to ingest (#1963) * wip * i believe this works -- let's see the pipeline * updated fixtures * Adding ADJLI_ET (#1976) * updated tile data * ensuring adjli_et in * Add back income percentile (#1977) * Add missing field to download (#1964) * Remove pydantic since it's unused (#1964) * Add percentile to CSV (#1964) * Update downloadable pickle (#1964) * Issue 105: Configure and run `black` and other pre-commit hooks (clean branch) (#1962) * Configure and run `black` and other pre-commit hooks Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov> * Removing fixed python version for black (#1985) * Fixup TA_COUNT and TA_PERC (#1991) * Change TA_PERC, change TA_COUNT (#1988, #1989) - Make TA_PERC_STR back into a nullable float following the rules requestsed in #1989 - Move TA_COUNT to be TA_COUNT_AK, also add a null TA_COUNT_C for CONUS that we can fill in later. * Fix typo comment (#1988) * Issue 1992: Do not impute income for null population tracts (#1993) * Hotfix for DOT data source DNS issue (#1999) * Make tribal overlap set score N (#2004) * Add "Is a Tribal DAC" field (#1998) * Add tribal DACs to score N final (#1998) * Add new fields to downloads (#1998) * Make a int a float (#1998) * Update field names, apply feedback (#1998) * Add assertions around codebook (#2014) * Add assertion around codebook (#1505) * Assert csv and excel have same cols (#1505) * Remove suffixes from tribal lands (#1974) (#2008) * Data source location (#2015) * data source location * toml * cdc_places * cdc_svi_index * url updates * child oppy and dot travel * up to hud_recap * completed ticket * cache bust * hud_recap * us_army_fuds * Remove vars the frontend doesn't use (#2020) (#2022) I did a pretty rough and simple analysis of the variables we put in the tiles and grepped the frontend code to see if (1) they're ever accessed and (2) if they're used, even if they're read once. I removed everything I noticed was not accessed. * Disable file size limits on tiles (#2031) * Disable file size limits on tiles * Remove print debugs I know. * Update file name pattern (#2037) (#2038) * Update file name pattern (#2037) * Remove ETL from generation (2037) I looked more carefully, and this ETL step isn't used in the score, so there's no need to run it every time. Per previous steps, I removed it from constants so the code is there it won't run by default. * Round ALL the float fields for the tiles (#2040) * Round ALL the float fields for the tiles (#2033) * Floor in a simpler way (#2033) Emma pointed out that all teh stuff we're doing in floor_series is probably unnecessary for this case, so just use the built-in floor. 
* Update pickle I missed (#2033) * Clean commit of just aggregate burden notebook (#1819) added a burden notebook * Update the dockerfile (#2045) * Update so the image builds (#2026) * Fix bad dict (2026) * Rename census tract field in downloads (#2068) * Change tract ID field name (2060) * Update lockfile (#2061) * Bump safety, jupyter, wheel (#2061) * DOn't depend directly on wheel (2061) * Bring narwhal reqs in line with main * Update tribal area counts (#2071) * Rename tribal area field (2062) * Add missing file (#2062) * Add checks to create version (#2047) (#2052) * Fix failing safety (#2114) * Ignore vuln that doesn't affect us 2113 https://nvd.nist.gov/vuln/detail/CVE-2022-42969 landed recently and there's no fix in py (which is maintenance mode). From my analysis, that CVE cannot hurt us (famous last words), so we'll ignore the vuln for now. * 2113 Update our gdal ppa * that didn't work (2113) * Don't add the PPA, the package exists (#2113) * Fix type (#2113) * Force an update of wheel 2113 * Also remove PPA line from create-score-versions * Drop 3.8 because of wheel 2113 * Put back 3.8, use newer actions * Try another way of upgrading wheel 2113 * Upgrade wheel in tox too 2113 * Typo fix 2113 Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: Emma Nechamkin <97977170+emma-nechamkin@users.noreply.github.com> Co-authored-by: Shelby Switzer <shelby.c.switzer@omb.eop.gov> Co-authored-by: Shelby Switzer <shelbyswitzer@gmail.com> Co-authored-by: Emma Nechamkin <Emma.J.Nechamkin@omb.eop.gov> Co-authored-by: Matt Bowen <83967628+mattbowen-usds@users.noreply.github.com> Co-authored-by: Jorge Escobar <83969469+esfoobar-usds@users.noreply.github.com> Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov> Co-authored-by: Jorge Escobar <jorge.e.escobar@omb.eop.gov> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov> Co-authored-by: matt bowen <matt@mattbowen.net>
2022-12-01 18:50:54 -08:00
<!-- markdown-link-check-disable -->
This project uses [pytest](https://docs.pytest.org/en/latest/) for its test suite.
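As a minimal sketch, assuming the Poetry-based local environment from the installation section above, the tests can typically be run from the `data/data-pipeline` directory as follows (the module path and marker name below are illustrative, not authoritative, and may differ in your checkout):

```bash
# Run the full test suite (assumes dependencies were installed
# with `poetry install` from the data/data-pipeline directory).
poetry run pytest

# Run the tests for a single area of the codebase;
# this path is illustrative.
poetry run pytest data_pipeline/etl/sources/tests/

# Run only tests tagged with a particular pytest marker
# (marker name is an assumption for illustration).
poetry run pytest -m smoketest
```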
* Readd CDC dataset config (#1900) * adding comments to fips code * delete unnecessary loggers Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov> * Improve score test documentation based on Lucas's feedback (#1835) (#1914) * Better document base on Lucas's feedback (#1835) * Fix typo (#1835) * Add test to verify GEOJSON matches tiles (#1835) * Remove NOOP line (#1835) * Move GEOJSON generation up for new smoketest (#1835) * Fixup code format (#1835) * Update readme for new somketest (#1835) * Cleanup source tests (#1912) * Move test to base for broader coverage (#1848) * Remove duplicate line (#1848) * FUDS needed an extra mock (#1848) * Add tribal count notebook (#1917) (#1919) * Add tribal count notebook (#1917) * test without caching * added comment Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov> * Add tribal overlap to downloads (#1907) * Add tribal data to downloads (#1904) * Update test pickle with current cols (#1904) * Remove text of tribe names from GeoJSON (#1904) * Update test data (#1904) * Add tribal overlap to smoketests (#1904) * Issue 1910: Do not impute income for 0 population tracts (#1918) * should be working, has unnecessary loggers * removing loggers and cleaning up * updating ejscreen tests * adding tests and responding to PR feedback * fixing broken smoke test * delete smoketest docs * updating click * updating click * Bump just jupyterlab (#1930) * Fixing link checker (#1929) * Update deps safety says are vulnerable (#1937) (#1938) Co-authored-by: matt bowen <matt@mattbowen.net> * Add demos for island areas (#1932) * Backfill population in island areas (#1882) * Update smoketest to account for backfills (#1882) As I wrote in the commend: We backfill island areas with data from the 2010 census, so if THOSE tracts have data beyond the data source, that's to be expected and is fine to pass. If some other state or territory does though, this should fail This ends up being a nice way of documenting that behavior i guess! 
* Fixup lint issues (#1882) * Add in race demos to 2010 census pull (#1851) * Add backfill data to score (#1851) * Change column name (#1851) * Fill demos after the score (#1851) * Add income back, adjust test (#1882) * Apply code-review feedback (#1851) * Add test for island area backfill (#1851) * Fix bad rename (#1851) * Reorder download fields, add plumbing back (#1942) * Add back lack of plumbing fields (#1920) * Reorder fields for excel (#1921) * Reorder excel fields (#1921) * Fix formating, lint errors, pickes (#1921) * Add missing plumbing col, fix order again (#1921) * Update that pickle (#1921) * refactoring tribal (#1960) * updated with scoring comparison * updated for narhwal -- leaving commented code in for now * pydantic upgrade * produce a string for the front end to ingest (#1963) * wip * i believe this works -- let's see the pipeline * updated fixtures * Adding ADJLI_ET (#1976) * updated tile data * ensuring adjli_et in * Add back income percentile (#1977) * Add missing field to download (#1964) * Remove pydantic since it's unused (#1964) * Add percentile to CSV (#1964) * Update downloadable pickle (#1964) * Issue 105: Configure and run `black` and other pre-commit hooks (clean branch) (#1962) * Configure and run `black` and other pre-commit hooks Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov> * Removing fixed python version for black (#1985) * Fixup TA_COUNT and TA_PERC (#1991) * Change TA_PERC, change TA_COUNT (#1988, #1989) - Make TA_PERC_STR back into a nullable float following the rules requestsed in #1989 - Move TA_COUNT to be TA_COUNT_AK, also add a null TA_COUNT_C for CONUS that we can fill in later. * Fix typo comment (#1988) * Issue 1992: Do not impute income for null population tracts (#1993) * Hotfix for DOT data source DNS issue (#1999) * Make tribal overlap set score N (#2004) * Add "Is a Tribal DAC" field (#1998) * Add tribal DACs to score N final (#1998) * Add new fields to downloads (#1998) * Make a int a float (#1998) * Update field names, apply feedback (#1998) * Add assertions around codebook (#2014) * Add assertion around codebook (#1505) * Assert csv and excel have same cols (#1505) * Remove suffixes from tribal lands (#1974) (#2008) * Data source location (#2015) * data source location * toml * cdc_places * cdc_svi_index * url updates * child oppy and dot travel * up to hud_recap * completed ticket * cache bust * hud_recap * us_army_fuds * Remove vars the frontend doesn't use (#2020) (#2022) I did a pretty rough and simple analysis of the variables we put in the tiles and grepped the frontend code to see if (1) they're ever accessed and (2) if they're used, even if they're read once. I removed everything I noticed was not accessed. * Disable file size limits on tiles (#2031) * Disable file size limits on tiles * Remove print debugs I know. * Update file name pattern (#2037) (#2038) * Update file name pattern (#2037) * Remove ETL from generation (2037) I looked more carefully, and this ETL step isn't used in the score, so there's no need to run it every time. Per previous steps, I removed it from constants so the code is there it won't run by default. * Round ALL the float fields for the tiles (#2040) * Round ALL the float fields for the tiles (#2033) * Floor in a simpler way (#2033) Emma pointed out that all teh stuff we're doing in floor_series is probably unnecessary for this case, so just use the built-in floor. 
* Update pickle I missed (#2033) * Clean commit of just aggregate burden notebook (#1819) added a burden notebook * Update the dockerfile (#2045) * Update so the image builds (#2026) * Fix bad dict (2026) * Rename census tract field in downloads (#2068) * Change tract ID field name (2060) * Update lockfile (#2061) * Bump safety, jupyter, wheel (#2061) * DOn't depend directly on wheel (2061) * Bring narwhal reqs in line with main * Update tribal area counts (#2071) * Rename tribal area field (2062) * Add missing file (#2062) * Add checks to create version (#2047) (#2052) * Fix failing safety (#2114) * Ignore vuln that doesn't affect us 2113 https://nvd.nist.gov/vuln/detail/CVE-2022-42969 landed recently and there's no fix in py (which is maintenance mode). From my analysis, that CVE cannot hurt us (famous last words), so we'll ignore the vuln for now. * 2113 Update our gdal ppa * that didn't work (2113) * Don't add the PPA, the package exists (#2113) * Fix type (#2113) * Force an update of wheel 2113 * Also remove PPA line from create-score-versions * Drop 3.8 because of wheel 2113 * Put back 3.8, use newer actions * Try another way of upgrading wheel 2113 * Upgrade wheel in tox too 2113 * Typo fix 2113 Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: Emma Nechamkin <97977170+emma-nechamkin@users.noreply.github.com> Co-authored-by: Shelby Switzer <shelby.c.switzer@omb.eop.gov> Co-authored-by: Shelby Switzer <shelbyswitzer@gmail.com> Co-authored-by: Emma Nechamkin <Emma.J.Nechamkin@omb.eop.gov> Co-authored-by: Matt Bowen <83967628+mattbowen-usds@users.noreply.github.com> Co-authored-by: Jorge Escobar <83969469+esfoobar-usds@users.noreply.github.com> Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov> Co-authored-by: Jorge Escobar <jorge.e.escobar@omb.eop.gov> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov> Co-authored-by: matt bowen <matt@mattbowen.net>
2022-12-01 18:50:54 -08:00
<!-- markdown-link-check-enable-->
To run tests, simply run `poetry run pytest` in this directory (`justice40-tool/data/data-pipeline`).
Test data is configured via [fixtures](https://docs.pytest.org/en/latest/explanation/fixtures.html).
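For orientation, a fixture is just a decorated function whose return value pytest injects into any test that names it as an argument. Below is a minimal sketch of that pattern; the fixture and column names are made up for illustration and are not fixtures from this repo:

```
import pandas as pd
import pytest


@pytest.fixture
def sample_tract_df() -> pd.DataFrame:
    # A tiny, deterministic frame standing in for real ETL output.
    return pd.DataFrame(
        {
            "GEOID10_TRACT": pd.Series(["01001020100", "01001020200"], dtype="string"),
            "some_percentile": [0.25, 0.75],
        }
    )


def test_percentile_is_within_bounds(sample_tract_df):
    # pytest passes the fixture's return value in by argument name.
    assert sample_tract_df["some_percentile"].between(0, 1).all()
```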
### Running the Full Suite
Our _full_ test and check suite, including security and code format checks, is configured using [`tox`](tox.ini). This suite can be run using the command `poetry run tox` from the `justice40-tool/data/data-pipeline` directory.

Each run takes a while because `tox` builds each environment from scratch. If you'd like to save time, you can reuse a previously built environment by running a single environment directly, e.g. `poetry run tox -e lint` to run only the lint checks.
### Score and Post-Processing Tests
The fixtures used in the score post-processing tests are slightly different. These fixtures use [pickle files](https://docs.python.org/3/library/pickle.html) to store dataframes to disk. This is ultimately because when you assert equality on two dataframes, columns whose values look identical are still counted as unequal if their dtypes are mismatched.
In a bit more detail:
1. Pandas dataframes are typed, and by default types are inferred when you create one from scratch. If you create a dataframe using the `DataFrame` [constructors](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html#pandas.DataFrame), there is no guarantee that types will be correct without explicit `dtype` annotations. Explicit `dtype` annotations are possible, but this leads us to point #2:
2. The transformations/dataframes in the source code under test don't always require specific types, and it is often sufficient for the code itself to rely on the `object` dtype. I attempted adding explicit typing based on the "logical" type of given columns, but in practice it produced non-matching dataframes that _actually_ held the same values; in particular, it was very common to have one dataframe column of type `string` and another of type `object` carrying identical values. That is to say, even when we created a "correctly" typed dataframe (according to our logical assumptions about what the types should be), it was still counted as mismatched against the dataframes actually used in the program (see the sketch just after this list). To fix this "the right way", it is necessary to explicitly annotate types at the point of the `read_csv` call, which has other potential unintended side effects and would need to be done carefully.
3. For larger dataframes (some of these have 150+ values), it was initially deemed too difficult and time-consuming to manually annotate all types, and further, to keep those annotations in sync with what is expected in the source code under test.
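As a minimal illustration of the dtype problem, the two frames below hold identical visible values and still fail pandas' equality assertion:

```
import pandas as pd
import pandas.testing as pdt

# Identical visible values; only the dtypes differ ("object" vs "string").
left = pd.DataFrame({"GEOID10_TRACT": pd.Series(["01001020100"], dtype="object")})
right = pd.DataFrame({"GEOID10_TRACT": pd.Series(["01001020100"], dtype="string")})

pdt.assert_frame_equal(left, right)  # raises AssertionError because the dtypes differ
```

Annotating the dtype at the `read_csv` call site, as in the snippet in the next section, is the kind of explicit annotation point #2 refers to.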
#### Updating Pickles
If you update the score in any way, it is necessary to create new pickles so that data is validated correctly.
It starts with the `data_pipeline/etl/score/tests/sample_data/score_data_initial.csv`, which is the first two rows of the `score/full/usa.csv`.
To update this file, run a full score generation, then open a Python shell from the `data-pipeline` directory (e.g. `poetry run python3`), and then update the file with the following commands:
```
from pathlib import Path

import pandas as pd

# Path.cwd() is the data-pipeline directory when the shell is opened from there.
data_path = Path.cwd()

# Grab the first two rows of the freshly generated score...
score_csv_path = data_path / "data_pipeline" / "data" / "score" / "csv" / "full" / "usa.csv"
score_initial_df = pd.read_csv(score_csv_path, dtype={"GEOID10_TRACT": "string"}, low_memory=False, nrows=2)

# ...and overwrite the sample CSV the tests read from.
score_initial_df.to_csv(data_path / "data_pipeline" / "etl" / "score" / "tests" / "sample_data" / "score_data_initial.csv", index=False)
```
Now you can move on to updating individual pickles for the tests.
> :bulb: **NOTE**
> It is helpful to perform the steps in VS Code, and in this order.
We have four pickle files that correspond to expected files:
| Pickle | Purpose |
| -------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `score_data_expected.pkl` | Initial score without counties |
| `score_transformed_expected.pkl` | Intermediate score with `etl._extract_score` and `etl._transform_score` applied. There's no file for this intermediate process, so we need to capture the pickle mid-process. |
| `tile_data_expected.pkl` | Score with columns to be baked in tiles |
| `downloadable_data_expected.pkl` | Downloadable CSV |
To update the pickles, go one by one:
For the `score_transformed_expected.pkl`, put a breakpoint on [this line](https://github.com/usds/justice40-tool/blob/main/data/data-pipeline/data_pipeline/etl/score/tests/test_score_post.py#L62), just before the `pdt.assert_frame_equal` call, and run:
`pytest data_pipeline/etl/score/tests/test_score_post.py::test_transform_score`
Once on the breakpoint, capture the df to a pickle as follows:
```
from pathlib import Path

# Path.cwd() is the data-pipeline directory when pytest is launched from there.
data_path = Path.cwd()
score_transformed_actual.to_pickle(data_path / "data_pipeline" / "etl" / "score" / "tests" / "snapshots" / "score_transformed_expected.pkl", protocol=4)
```
Then take out the breakpoint and re-run the test: `pytest data_pipeline/etl/score/tests/test_score_post.py::test_transform_score`
For the `score_data_expected.pkl`, put a breakpoint on [this line](https://github.com/usds/justice40-tool/blob/main/data/data-pipeline/data_pipeline/etl/score/tests/test_score_post.py#L78), just before the `pdt.assert_frame_equal` call, and run:
`pytest data_pipeline/etl/score/tests/test_score_post.py::test_create_score_data`
Once on the breakpoint, capture the df to a pickle as follows:
```
from pathlib import Path

data_path = Path.cwd()
score_data_actual.to_pickle(data_path / "data_pipeline" / "etl" / "score" / "tests" / "snapshots" / "score_data_expected.pkl", protocol=4)
```
Then take out the breakpoint and re-run the test: `pytest data_pipeline/etl/score/tests/test_score_post.py::test_create_score_data`
For the `tile_data_expected.pkl`, put a breakpoint on [this line](https://github.com/usds/justice40-tool/blob/main/data/data-pipeline/data_pipeline/etl/score/tests/test_score_post.py#L90), just before the `pdt.assert_frame_equal` call, and run:
`pytest data_pipeline/etl/score/tests/test_score_post.py::test_create_tile_data`
Once on the breakpoint, capture the df to a pickle as follows:
```
from pathlib import Path

data_path = Path.cwd()
output_tiles_df_actual.to_pickle(data_path / "data_pipeline" / "etl" / "score" / "tests" / "snapshots" / "tile_data_expected.pkl", protocol=4)
```
Then take out the breakpoint and re-run the test: `pytest data_pipeline/etl/score/tests/test_score_post.py::test_create_tile_data`
For the `downloadable_data_expected.pkl`, put a breakpoint on [this line](https://github.com/usds/justice40-tool/blob/main/data/data-pipeline/data_pipeline/etl/score/tests/test_score_post.py#L98), just before the `pdt.assert_frame_equal` call, and run:
`pytest data_pipeline/etl/score/tests/test_score_post.py::test_create_downloadable_data`
Once on the breakpoint, capture the df to a pickle as follows:
```
from pathlib import Path

data_path = Path.cwd()
output_downloadable_df_actual.to_pickle(data_path / "data_pipeline" / "etl" / "score" / "tests" / "snapshots" / "downloadable_data_expected.pkl", protocol=4)
```
Then take out the breakpoint and re-run the test: `pytest data_pipeline/etl/score/tests/test_score_post.py::test_create_downloadable_data`
#### Future Enhancements
Pickles have several downsides that we should consider alternatives for:
1. They are opaque - it is necessary to open a Python interpreter (as in the steps above) to confirm their contents; see the sketch after this list.
2. They are a bit harder for newcomers to Python to grok.
3. They potentially encode flawed typing assumptions (see above) which are paved over for future test runs.
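On point #1, for example, checking a snapshot's contents means something like this from a Python shell in the `data-pipeline` directory:

```
import pandas as pd

# Load a snapshot and eyeball its dtypes and values.
df = pd.read_pickle("data_pipeline/etl/score/tests/snapshots/score_data_expected.pkl")
print(df.dtypes)
print(df.head())
```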
In the future, we could adopt any of the below strategies to work around this:
1. We could use [pytest-snapshot](https://pypi.org/project/pytest-snapshot/) to automatically store the output of each test as data changes. This would let us avoid generating a pickle for each method by hand; instead, you would only need to call `generate` once, and only when the dataframe had changed, as sketched below.
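A rough sketch of what that could look like, assuming we serialized each frame to CSV text for comparison (the `snapshot` fixture and `--snapshot-update` flag come from pytest-snapshot; `score_data_actual` stands in for the dataframe captured in the existing test):

```
import pandas as pd


def test_create_score_data(snapshot, score_data_actual: pd.DataFrame):
    # pytest-snapshot compares the value against a stored file and
    # (re)creates that file when pytest is run with --snapshot-update.
    snapshot.snapshot_dir = "data_pipeline/etl/score/tests/snapshots"
    snapshot.assert_match(
        score_data_actual.to_csv(index=False), "score_data_expected.csv"
    )
```

Note that comparing CSV text would also sidestep the dtype mismatches described above, at the cost of no longer checking types at all.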
I tracked it down to the line where we merge the score dataframe with constants.DATA_CENSUS_CSV_FILE_PATH --- there where > 100 tracts in the national census CSV that don't exist in the score, so those ended up with a Total threshhold count of np.nan, which is a float, and thereby cast those columns to float. For the moment I just cast it back. * No need for low memeory (#1848) * Add additional tests of tiles.csv (#1848) * Drop pre-2010 rows before computing score (#1848) Note this is probably NOT the optimal place for this change; it might make more sense for each source to filter its own tracts down to the acceptable tract list. However, that would be a pretty invasive change, where this is central and plenty of other things are happening in score transform that could be moved to sources, so for today, here's where the change will live. * Fix typo (#1848) * Switch from filter to inner join (#1848) * Remove no-op lines from tiles (#1848) * Apply feedback from review, linter (#1848) * Check the values oeverything in the frame (#1848) * Refactor checker class (#1848) * Add test for state names (#1848) * cleanup from reviewing my own code (#1848) * Fix lint error (#1858) * Apply Emma's feedback from review (#1848) * Remove refs to national_df (#1848) * Account for new, fake nullable bools in tiles (#1848) To handle a geojson limitation, Emma converted some nullable boolean colunms to float64 in the tiles export with the values {0.0, 1.0, nan}, giving us the same expressiveness. Sadly, this broke my assumption that all columns between the score and tiles csvs would have the same dtypes, so I need to account for these new, fake bools in my test. * Use equals instead of my worse version (#1848) * Missed a spot where we called _create_score_data (#1848) * Update per safety (#1848) Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov> * Add tests to make sure each source makes it to the score correctly (#1878) * Remove unused persistent poverty from score (#1835) * Test a few datasets for overlap in the final score (#1835) * Add remaining data sources (#1853) * Apply code-review feedback (#1835) * Rearrange a little for readabililty (#1835) * Add tract test (#1835) * Add test for score values (#1835) * Check for unmatched source tracts (#1835) * Cleanup numeric code to plaintext (#1835) * Make import more obvious (#1835) * Updating traffic barriers to include low pop threshold (#1889) Changing the traffic barriers to only be included for places with recorded population * Remove no land tracts from map (#1894) remove from map * Issue 1831: missing life expectancy data from Maine and Wisconsin (#1887) * Fixing missing states and adding tests for states to all classes * Removing low pop tracts from FEMA population loss (#1898) dropping 0 population from FEMA * 1831 Follow up (#1902) This code causes no functional change to the code. It does two things: 1. Uses difference instead of - to improve code style for working with sets. 2. Removes the line EXPECTED_MISSING_STATES = ["02", "15"], which is now redundant because of the line I added (in a previous pull request) of ALASKA_AND_HAWAII_EXPECTED_IN_DATA = False. 
* Add tests for all non-census sources (#1899) * Refactor CDC life-expectancy (1554) * Update to new tract list (#1554) * Adjust for tests (#1848) * Add tests for cdc_places (#1848) * Add EJScreen tests (#1848) * Add tests for HUD housing (#1848) * Add tests for GeoCorr (#1848) * Add persistent poverty tests (#1848) * Update for sources without zips, for new validation (#1848) * Update tests for new multi-CSV but (#1848) Lucas updated the CDC life expectancy data to handle a bug where two states are missing from the US Overall download. Since virtually none of our other ETL classes download multiple CSVs directly like this, it required a pretty invasive new mocking strategy. * Add basic tests for nature deprived (#1848) * Add wildfire tests (#1848) * Add flood risk tests (#1848) * Add DOT travel tests (#1848) * Add historic redlining tests (#1848) * Add tests for ME and WI (#1848) * Update now that validation exists (#1848) * Adjust for validation (#1848) * Add health insurance back to cdc places (#1848) Ooops * Update tests with new field (#1848) * Test for blank tract removal (#1848) * Add tracts for clipping behavior * Test clipping and zfill behavior (#1848) * Fix bad test assumption (#1848) * Simplify class, add test for tract padding (#1848) * Fix percentage inversion, update tests (#1848) Looking through the transformations, I noticed that we were subtracting a percentage that is usually between 0-100 from 1 instead of 100, and so were endind up with some surprising results. Confirmed with lucasmbrown-usds * Add note about first street data (#1848) * Issue 1900: Tribal overlap with Census tracts (#1903) * working notebook * updating notebook * wip * fixing broken tests * adding tribal overlap files * WIP * WIP * WIP, calculated count and names * working * partial cleanup * partial cleanup * updating field names * fixing bug * removing pyogrio * removing unused imports * updating test fixtures to be more realistic * cleaning up notebook * fixing black * fixing flake8 errors * adding tox instructions * updating etl_score * suppressing warning * Use projected CRSes, ignore geom types (#1900) I looked into this a bit, and in general the geometry type mismatch changes very little about the calculation; we have a mix of multipolygons and polygons. The fastest thing to do is just not keep geom type; I did some runs with it set to both True and False, and they're the same within 9 digits of precision. Logically we just want to overlaps, regardless of how the actual geometries are encoded between the frames, so we can in this case ignore the geom types and feel OKAY. I also moved to projected CRSes, since we are actually trying to do area calculations and so like, we should. Again, the change is small in magnitude but logically more sound. 
* Readd CDC dataset config (#1900) * adding comments to fips code * delete unnecessary loggers Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov> * Improve score test documentation based on Lucas's feedback (#1835) (#1914) * Better document base on Lucas's feedback (#1835) * Fix typo (#1835) * Add test to verify GEOJSON matches tiles (#1835) * Remove NOOP line (#1835) * Move GEOJSON generation up for new smoketest (#1835) * Fixup code format (#1835) * Update readme for new somketest (#1835) * Cleanup source tests (#1912) * Move test to base for broader coverage (#1848) * Remove duplicate line (#1848) * FUDS needed an extra mock (#1848) * Add tribal count notebook (#1917) (#1919) * Add tribal count notebook (#1917) * test without caching * added comment Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov> * Add tribal overlap to downloads (#1907) * Add tribal data to downloads (#1904) * Update test pickle with current cols (#1904) * Remove text of tribe names from GeoJSON (#1904) * Update test data (#1904) * Add tribal overlap to smoketests (#1904) * Issue 1910: Do not impute income for 0 population tracts (#1918) * should be working, has unnecessary loggers * removing loggers and cleaning up * updating ejscreen tests * adding tests and responding to PR feedback * fixing broken smoke test * delete smoketest docs * updating click * updating click * Bump just jupyterlab (#1930) * Fixing link checker (#1929) * Update deps safety says are vulnerable (#1937) (#1938) Co-authored-by: matt bowen <matt@mattbowen.net> * Add demos for island areas (#1932) * Backfill population in island areas (#1882) * Update smoketest to account for backfills (#1882) As I wrote in the commend: We backfill island areas with data from the 2010 census, so if THOSE tracts have data beyond the data source, that's to be expected and is fine to pass. If some other state or territory does though, this should fail This ends up being a nice way of documenting that behavior i guess! 
* Fixup lint issues (#1882) * Add in race demos to 2010 census pull (#1851) * Add backfill data to score (#1851) * Change column name (#1851) * Fill demos after the score (#1851) * Add income back, adjust test (#1882) * Apply code-review feedback (#1851) * Add test for island area backfill (#1851) * Fix bad rename (#1851) * Reorder download fields, add plumbing back (#1942) * Add back lack of plumbing fields (#1920) * Reorder fields for excel (#1921) * Reorder excel fields (#1921) * Fix formating, lint errors, pickes (#1921) * Add missing plumbing col, fix order again (#1921) * Update that pickle (#1921) * refactoring tribal (#1960) * updated with scoring comparison * updated for narhwal -- leaving commented code in for now * pydantic upgrade * produce a string for the front end to ingest (#1963) * wip * i believe this works -- let's see the pipeline * updated fixtures * Adding ADJLI_ET (#1976) * updated tile data * ensuring adjli_et in * Add back income percentile (#1977) * Add missing field to download (#1964) * Remove pydantic since it's unused (#1964) * Add percentile to CSV (#1964) * Update downloadable pickle (#1964) * Issue 105: Configure and run `black` and other pre-commit hooks (clean branch) (#1962) * Configure and run `black` and other pre-commit hooks Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov> * Removing fixed python version for black (#1985) * Fixup TA_COUNT and TA_PERC (#1991) * Change TA_PERC, change TA_COUNT (#1988, #1989) - Make TA_PERC_STR back into a nullable float following the rules requestsed in #1989 - Move TA_COUNT to be TA_COUNT_AK, also add a null TA_COUNT_C for CONUS that we can fill in later. * Fix typo comment (#1988) * Issue 1992: Do not impute income for null population tracts (#1993) * Hotfix for DOT data source DNS issue (#1999) * Make tribal overlap set score N (#2004) * Add "Is a Tribal DAC" field (#1998) * Add tribal DACs to score N final (#1998) * Add new fields to downloads (#1998) * Make a int a float (#1998) * Update field names, apply feedback (#1998) * Add assertions around codebook (#2014) * Add assertion around codebook (#1505) * Assert csv and excel have same cols (#1505) * Remove suffixes from tribal lands (#1974) (#2008) * Data source location (#2015) * data source location * toml * cdc_places * cdc_svi_index * url updates * child oppy and dot travel * up to hud_recap * completed ticket * cache bust * hud_recap * us_army_fuds * Remove vars the frontend doesn't use (#2020) (#2022) I did a pretty rough and simple analysis of the variables we put in the tiles and grepped the frontend code to see if (1) they're ever accessed and (2) if they're used, even if they're read once. I removed everything I noticed was not accessed. * Disable file size limits on tiles (#2031) * Disable file size limits on tiles * Remove print debugs I know. * Update file name pattern (#2037) (#2038) * Update file name pattern (#2037) * Remove ETL from generation (2037) I looked more carefully, and this ETL step isn't used in the score, so there's no need to run it every time. Per previous steps, I removed it from constants so the code is there it won't run by default. * Round ALL the float fields for the tiles (#2040) * Round ALL the float fields for the tiles (#2033) * Floor in a simpler way (#2033) Emma pointed out that all teh stuff we're doing in floor_series is probably unnecessary for this case, so just use the built-in floor. 
* Update pickle I missed (#2033) * Clean commit of just aggregate burden notebook (#1819) added a burden notebook * Update the dockerfile (#2045) * Update so the image builds (#2026) * Fix bad dict (2026) * Rename census tract field in downloads (#2068) * Change tract ID field name (2060) * Update lockfile (#2061) * Bump safety, jupyter, wheel (#2061) * DOn't depend directly on wheel (2061) * Bring narwhal reqs in line with main * Update tribal area counts (#2071) * Rename tribal area field (2062) * Add missing file (#2062) * Add checks to create version (#2047) (#2052) * Fix failing safety (#2114) * Ignore vuln that doesn't affect us 2113 https://nvd.nist.gov/vuln/detail/CVE-2022-42969 landed recently and there's no fix in py (which is maintenance mode). From my analysis, that CVE cannot hurt us (famous last words), so we'll ignore the vuln for now. * 2113 Update our gdal ppa * that didn't work (2113) * Don't add the PPA, the package exists (#2113) * Fix type (#2113) * Force an update of wheel 2113 * Also remove PPA line from create-score-versions * Drop 3.8 because of wheel 2113 * Put back 3.8, use newer actions * Try another way of upgrading wheel 2113 * Upgrade wheel in tox too 2113 * Typo fix 2113 Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: Emma Nechamkin <97977170+emma-nechamkin@users.noreply.github.com> Co-authored-by: Shelby Switzer <shelby.c.switzer@omb.eop.gov> Co-authored-by: Shelby Switzer <shelbyswitzer@gmail.com> Co-authored-by: Emma Nechamkin <Emma.J.Nechamkin@omb.eop.gov> Co-authored-by: Matt Bowen <83967628+mattbowen-usds@users.noreply.github.com> Co-authored-by: Jorge Escobar <83969469+esfoobar-usds@users.noreply.github.com> Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov> Co-authored-by: Jorge Escobar <jorge.e.escobar@omb.eop.gov> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov> Co-authored-by: matt bowen <matt@mattbowen.net>
2022-12-01 18:50:54 -08:00
<!-- markdown-link-check-disable -->
Additionally, you could use a pandas type-schema annotation tool such as [pandera](https://pandera.readthedocs.io/en/stable/schema_models.html?highlight=inputschema#basic-usage) to annotate input/output schemas for given functions, and your unit tests could use these to validate explicitly. This could be of very high value for encoding expectations.
<!-- markdown-link-check-enable -->
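As an illustration, here is a minimal sketch of that pandera approach. The column names and checks are hypothetical stand-ins, not the pipeline's actual schema:

```python
# Sketch of explicit schema validation with pandera; columns are hypothetical.
import pandas as pd
import pandera as pa

TRACT_SCHEMA = pa.DataFrameSchema(
    {
        # Tract IDs are 11-character strings (leading zeros must be preserved).
        "GEOID10_TRACT": pa.Column(str, pa.Check.str_length(11, 11)),
        # Percentile-style scores should stay within [0, 1].
        "score": pa.Column(float, pa.Check.in_range(0.0, 1.0)),
    }
)

df = pd.DataFrame({"GEOID10_TRACT": ["01001020100"], "score": [0.42]})
TRACT_SCHEMA.validate(df)  # raises pa.errors.SchemaError on any violation
```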
Alternatively, or in conjunction, you could move toward a more strictly typed container format for reads and writes, such as SQL/SQLite, and use something like [SQLModel](https://github.com/tiangolo/sqlmodel) to provide more explicit type guarantees.
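A minimal sketch of that idea, using a hypothetical `TractScore` table (the table and column names are illustrative, not part of the pipeline):

```python
# Sketch only: a hypothetical TractScore table whose column types are declared
# up front via SQLModel, so reads and writes go through a typed schema.
from typing import Optional

from sqlmodel import Field, Session, SQLModel, create_engine


class TractScore(SQLModel, table=True):
    geoid10_tract: str = Field(primary_key=True, max_length=11)
    score: float
    is_disadvantaged: Optional[bool] = None


engine = create_engine("sqlite:///score.db")
SQLModel.metadata.create_all(engine)  # create the table with declared types

with Session(engine) as session:
    session.add(TractScore(geoid10_tract="01001020100", score=0.42))
    session.commit()
```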
### Fixtures used in ETL "Snapshot Tests"
ETLs are tested for the results of their extract, transform, and load steps by borrowing the concept of "snapshot testing" from the world of front-end development.
Snapshots are easy to update and clearly demonstrate the results of a series of changes to the code base. They are useful both for verifying that results have not changed when you expect no change, and for capturing a large, expected change in results that would be tedious to encode in traditional unit tests.
However, snapshot tests are also dangerous. An unthinking developer may update the snapshot fixtures and unknowingly encode a bug into what the test now treats as the intended output.
To update the snapshot fixtures of an ETL class, follow these steps:
1. If you need to manually update the fixtures, update the "furthest upstream" source that is called by `_setup_etl_instance_and_run_extract`. For instance, this may involve creating a new zip file that imitates the source data (e.g., for the National Risk Index test, update `data_pipeline/tests/sources/national_risk_index/data/NRI_Table_CensusTracts.zip`, which is a 64 KB imitation of the 405 MB source NRI data).
2. Run `pytest . -rsx --update_snapshots` to update the snapshots for all files, or pass a specific test file to pytest to be more precise (e.g., `pytest data_pipeline/tests/sources/national_risk_index/test_etl.py -rsx --update_snapshots`). A sketch of how such a flag is typically wired into `conftest.py` follows this list.
3. Re-run pytest without the `--update_snapshots` flag (e.g., `pytest . -rsx`) to ensure the tests now pass.
4. Carefully check the `git diff` for the updates to all test fixtures to make sure the changes are as expected. This part is very important. For instance, if you changed a column name, you would expect only the column name to change in the output. If you modified the calculation of some data, spot-check the results to see whether the numbers in the updated fixtures are as expected.
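For reference, here is a minimal sketch of how a flag like `--update_snapshots` is commonly exposed in a pytest `conftest.py`. This illustrates the general mechanism under assumed names, not the project's actual wiring:

```python
# conftest.py sketch: one common way to expose an --update_snapshots flag.
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--update_snapshots",
        action="store_true",
        default=False,
        help="Rewrite snapshot fixture files instead of asserting against them.",
    )


@pytest.fixture
def update_snapshots(request) -> bool:
    # Tests (or fixtures) can consume this to decide whether to overwrite
    # their snapshot files or compare against them.
    return request.config.getoption("--update_snapshots")
```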
### Other ETL Unit Tests
Outside of the ETL snapshot tests discussed above, ETL unit tests are typically organized into three buckets:
- Extract Tests
- Transform Tests
- Load Tests
Each bucket is tested using a different strategy, explained below.
#### Extract Tests
Extract tests exercise the limited data transformations that occur as data is loaded from source files.
In these tests, we use small, fake CSVs read via `StringIO`, taken from the first several rows of the files of interest, and ensure the data types are correct.
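A sketch of that pattern, with made-up columns rather than ones taken from an actual ETL:

```python
# Sketch of a StringIO-based extract test; the columns are hypothetical.
from io import StringIO

import pandas as pd

FAKE_CSV = (
    "GEOID10_TRACT,SOME_VALUE\n"
    "01001020100,1.5\n"
    "01001020200,2.0\n"
)


def test_extract_preserves_dtypes():
    df = pd.read_csv(StringIO(FAKE_CSV), dtype={"GEOID10_TRACT": str})
    # Tract IDs must stay strings so leading zeros are not lost.
    assert pd.api.types.is_object_dtype(df["GEOID10_TRACT"])
    assert pd.api.types.is_float_dtype(df["SOME_VALUE"])
```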
Down the line, we could use a tool like [Pandera](https://pandera.readthedocs.io/) to enforce schemas, both for the tests and the classes themselves.
#### Transform Tests
Transform tests are the heart of ETL unit tests; they compare expected ("ideal") dataframes with their actual counterparts.
See the [Fixtures](#configuration--fixtures) section above for information about where this data comes from.
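In sketch form, a transform test boils down to a frame-equality assertion. The fixture and attribute names here are hypothetical stand-ins for a real ETL class and its snapshot fixtures:

```python
# Sketch of a transform test: run the transform, then compare the resulting
# dataframe against a known-good fixture.
import pandas as pd


def test_transform(etl_instance, expected_transform_df):
    etl_instance.transform()
    pd.testing.assert_frame_equal(
        etl_instance.output_df,  # hypothetical attribute holding the result
        expected_transform_df,
        check_dtype=True,  # dtype drift is a common, easy-to-miss regression
    )
```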
#### Load Tests
These make use of [tmp_path_factory](https://docs.pytest.org/en/latest/how-to/tmp_path.html) to create a temporary file system under `temp_dir`, and validate that the correct files are written to the correct locations.
Additional future modifications could include the use of Pandera and/or other schema validation tools, or a more explicit test that the data written to file can be read back in and yields the same dataframe.
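A sketch of such a load test, using pytest's built-in `tmp_path` fixture; the `OUTPUT_PATH` attribute, `load` method, and output file name are hypothetical here:

```python
# Sketch of a load test: redirect output to a pytest-managed temp directory,
# run load, and verify the expected file exists and round-trips.
import pandas as pd


def test_load_writes_expected_file(tmp_path, etl_instance):
    etl_instance.OUTPUT_PATH = tmp_path  # hypothetical: redirect writes
    etl_instance.load()
    output_file = tmp_path / "usa.csv"  # hypothetical output file name
    assert output_file.exists()
    # Round-trip check: the written CSV can be read back into a dataframe.
    pd.read_csv(output_file)
```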
### Smoketests
To ensure that the score and tiles are generated correctly, there is a suite of "smoke tests" that can be run after the ETL and scoring steps have completed and outputs such as the frontend GEOJSON have been created.
These tests are implemented as pytest tests but are skipped by default. To run them (a sketch of the test pattern follows these steps):
1. Generate a full score with `poetry run python3 data_pipeline/application.py score-full-run`
2. Generate the tile data with `poetry run python3 data_pipeline/application.py generate-score-post`
3. Generate the frontend GEOJSON with `poetry run python3 data_pipeline/application.py geo-score`
4. Select the smoke tests for pytest with `poetry run pytest data_pipeline/tests -k smoketest`
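For illustration, a smoke test that `-k smoketest` would pick up might look like the following sketch. The output path and column name are assumptions for the example, not the repo's actual fixtures; skipping when the score outputs are missing is one plausible way such tests stay out of ordinary test runs.

```python
# A hypothetical smoke test; the CSV path and GEOID column are assumed
# for illustration and may differ from the pipeline's real outputs.
from pathlib import Path

import pandas as pd
import pytest

SCORE_CSV = Path("data_pipeline/data/score/csv/full/usa.csv")


@pytest.mark.skipif(
    not SCORE_CSV.exists(), reason="score outputs have not been generated"
)
def test_score_smoketest():
    df = pd.read_csv(SCORE_CSV, dtype={"GEOID10_TRACT": str})
    # Every row should carry an 11-digit census tract GEOID.
    assert (df["GEOID10_TRACT"].str.len() == 11).all()
```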