Update data documentation and some data steps (#407)

* Minor documentation updates, plus calenviroscreen S3 URL fix

* Update score comparison docs and code

* Add steps for running the comparison tool
* Update HUD recap ETL to ensure GEOID is imported as a string (if it is imported as an integer by default, the leading "0" is stripped from many IDs); see the sketch after this list

* Add note about execution time

* Move step from paragraph to list

* Update output dir in README for comp tool
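
The GEOID fix above comes down to forcing a string dtype when the CSV is read. A minimal pandas sketch of the idea (the file name and column name are illustrative, not necessarily those used by the actual ETL):

```python
import pandas as pd

# If GEOID were inferred as an integer, "06075010100" would become 6075010100,
# silently dropping the leading "0"; reading it as a string preserves it.
df = pd.read_csv("hud_recap.csv", dtype={"GEOID": str})  # illustrative file name
```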

Co-authored-by: Shelby Switzer <shelby.switzer@cms.hhs.gov>
Shelby Switzer 2021-07-29 10:28:52 -04:00 committed by GitHub
commit 387ee3a382
4 changed files with 1528 additions and 20 deletions

View file

@@ -55,6 +55,6 @@ Si bien se puede usar cualquier IDE, describimos cómo configurar VS Code
Para el desarrollo de front-end, lea el [Client README](client/README.md).
-Para el desarrollo de back-end, lea el [Score README](score/README.md).
+Para el desarrollo de back-end, lea el [Data Pipeline README](data/data-pipeline/README.md).
Para la implementación de software, lea el [Infrastructure README](infrastructure/README.md).

View file

@@ -78,7 +78,7 @@ There is also a docker-compose file that will eventually include everything you
If you wish to check out our client-side code (i.e. the GatsbyJS statically generated website with the map), check out the `client` directory and its [README](client/README.md).
**Running the backend**
-If you want to run the data pipeline with ETL and score generation, check out the `score` directory and its [README](score/README.md).
+If you want to run the data pipeline with ETL and score generation, check out the `data/data-pipeline` directory and its [README](data/data-pipeline/README.md).
**Deployment**
For core team contributors working on deployment, check out the `infrastructure` directory and its [README](infrastructure/README.md) for information on deploying the backend to AWS.

View file

@@ -34,7 +34,7 @@ This application is used to compare experimental versions of the Justice40 score
_**NOTE:** These scores **do not** represent final versions of the Justice40 scores and are merely used for comparative purposes. As a result, the specific input columns and formulas used to calculate them are likely to change over time._
-### Score comparison workflow
+### Score generation and comparison workflow
The descriptions below provide a more detailed outline of what happens at each step of the ETL and score calculation workflow.
@@ -58,8 +58,8 @@ TODO add mermaid diagram
1. The `etl-run` command will execute the corresponding ETL script for each data source in `etl/sources/`. For example, `etl/sources/ejscreen/etl.py` is the ETL script for EJSCREEN data.
1. Each ETL script will extract the data from its original source, then format the data into `.csv` files that get stored in the relevant folder in `data/dataset/`. For example, HUD Housing data is stored in `data/dataset/hud_housing/usa.csv`.
-_**NOTE:** You have the option to pass the name of a specific data source to the `etl-run` command, which will limit the execution of the ETL process to that specific data source._
+_**NOTE:** You have the option to pass the name of a specific data source to the `etl-run` command using the `-d` flag, which will limit the execution of the ETL process to that specific data source._
-_For example: `poetry run python application.py etl-run ejscreen` would only run the ETL process for EJSCREEN data._
+_For example: `poetry run python application.py etl-run -d ejscreen` would only run the ETL process for EJSCREEN data._
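
As a rough illustration of the extract-then-format pattern described above, a hypothetical single-source ETL step might look like the sketch below; the URL, column names, and output folder are placeholders, not the repository's actual ones (the real scripts live under `etl/sources/`, e.g. `etl/sources/ejscreen/etl.py`):

```python
from pathlib import Path

import pandas as pd

# Placeholders for illustration only.
SOURCE_URL = "https://example.com/some_source.csv"
OUTPUT_DIR = Path("data/dataset/some_source")


def etl() -> None:
    # Extract: pull the raw data, forcing ID-like columns to strings so
    # leading zeros are not stripped.
    df = pd.read_csv(SOURCE_URL, dtype={"GEOID10": str})

    # Transform: keep only the columns needed downstream (illustrative names).
    df = df[["GEOID10", "SOME_INDICATOR"]]

    # Load: write the formatted CSV where the score step expects to find it.
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    df.to_csv(OUTPUT_DIR / "usa.csv", index=False)


if __name__ == "__main__":
    etl()
```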
#### Step 2: Calculate the Justice40 score experiments
@@ -74,7 +74,31 @@ _For example: `poetry run python application.py etl-run ejscreen` would only run
#### Step 3: Compare the Justice40 score experiments to other indices
-1. TODO: Describe the steps for this
+We are building a comparison tool to enable easy (or at least straightforward) comparison of the Justice40 score with other existing indices. The goal is that, as we experiment and iterate on a scoring methodology, we can understand how our score overlaps with or differs from other indices that communities, nonprofits, and governments use to inform decision making.
Right now, our comparison tool exists simply as a Python notebook in `data/data-pipeline/ipython/scoring_comparison.ipynb`.
To run this comparison tool:
1. Make sure you've gone through the above steps to run the data ETL and score generation.
1. From this directory (`data/data-pipeline`), navigate to the `ipython` directory: `cd ipython`.
1. Ensure you have `pandoc` installed on your computer. If you're on a Mac, run `brew install pandoc`; for other OSes, see pandoc's [installation guide](https://pandoc.org/installing.html).
1. Install the extra dependencies:
```
pip install pypandoc
pip install requests
pip install us
pip install tqdm
pip install dynaconf
pip install xlsxwriter
```
1. Start the notebook server: `jupyter notebook`
1. In your browser, navigate to one of the URLs returned by the above command.
1. Select `scoring_comparison.ipynb` from the options in your browser.
1. Run through the steps in the notebook. You can step through them one at a time by clicking the "Run" button for each cell, or open the "Cell" menu and click "Run all" to run them all at once. (A headless alternative is sketched after this list.)
1. Reports and spreadsheets generated by the comparison tool will be available in `data/data-pipeline/data/comparison_outputs`.
_**NOTE:** This may take anywhere from several minutes to over an hour to fully execute and generate the reports._
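
If you prefer to execute the notebook non-interactively instead of clicking through cells in the browser, something like the following sketch works, assuming `nbformat` and `nbclient` are available (they ship with recent Jupyter/nbconvert installs):

```python
# Run scoring_comparison.ipynb headlessly from the ipython directory.
import nbformat
from nbclient import NotebookClient

nb = nbformat.read("scoring_comparison.ipynb", as_version=4)
# Generous timeout, since generating the reports can take well over an hour.
NotebookClient(nb, timeout=7200).execute()
nbformat.write(nb, "scoring_comparison_executed.ipynb")
```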
### Data Sources

File diff suppressed because it is too large