mirror of https://github.com/DOI-DO/j40-cejst-2.git · synced 2025-07-31 03:11:16 -07:00
Update data documentation and some data steps (#407)
* Minor documentation updates, plus calenvironscreen S3 URL fix
* Update score comparison docs and code
* Add steps for running the comparison tool
* Update HUD recap ETL to ensure GEOID is imported as a string (if it is imported as an integer by default, it will strip the leading "0" from many IDs)
* Add note about execution time
* Move step from paragraph to list
* Update output dir in README for comp tool

Co-authored-by: Shelby Switzer <shelby.switzer@cms.hhs.gov>
This commit is contained in: parent b404fdcc43 · commit 387ee3a382 · 4 changed files with 1528 additions and 20 deletions
@@ -34,7 +34,7 @@ This application is used to compare experimental versions of the Justice40 score

_**NOTE:** These scores **do not** represent final versions of the Justice40 scores and are merely used for comparative purposes. As a result, the specific input columns and formulas used to calculate them are likely to change over time._

-### Score comparison workflow
+### Score generation and comparison workflow

The descriptions below provide a more detailed outline of what happens at each step of the ETL and score calculation workflow.

@@ -58,8 +58,8 @@ TODO add mermaid diagram

1. The `etl-run` command will execute the corresponding ETL script for each data source in `etl/sources/`. For example, `etl/sources/ejscreen/etl.py` is the ETL script for EJSCREEN data.
1. Each ETL script will extract the data from its original source, then format the data into `.csv` files that get stored in the relevant folder in `data/dataset/`. For example, HUD Housing data is stored in `data/dataset/hud_housing/usa.csv`. (See the sketch below for how to re-import these files without corrupting their ID columns.)
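
As the commit message above notes for the HUD recap ETL, ID columns must be read back as strings, or pandas will infer integers and strip leading zeros. A minimal sketch of the idea, assuming a `GEOID` column as in that fix (the column name and file path here are illustrative, not the pipeline's API):

```python
import pandas as pd

# Without a dtype hint, pandas infers GEOID as an integer, turning
# e.g. "01001" into 1001 and silently corrupting many IDs.
df = pd.read_csv(
    "data/dataset/hud_housing/usa.csv",
    dtype={"GEOID": str},  # keep leading zeros intact
)
```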

-_**NOTE:** You have the option to pass the name of a specific data source to the `etl-run` command, which will limit the execution of the ETL process to that specific data source._
-
-_For example: `poetry run python application.py etl-run ejscreen` would only run the ETL process for EJSCREEN data._
+_**NOTE:** You have the option to pass the name of a specific data source to the `etl-run` command using the `-d` flag, which will limit the execution of the ETL process to that specific data source._
+
+_For example: `poetry run python application.py etl-run -d ejscreen` would only run the ETL process for EJSCREEN data._

#### Step 2: Calculate the Justice40 score experiments

@@ -74,7 +74,31 @@ _For example: `poetry run python application.py etl-run ejscreen` would only run

#### Step 3: Compare the Justice40 score experiments to other indices

-1. TODO: Describe the steps for this

We are building a comparison tool to enable easy (or at least straightforward) comparison of the Justice40 score with other existing indices. The goal is that, as we experiment and iterate on a scoring methodology, we can understand how our score overlaps with or differs from other indices that communities, nonprofits, and governments use to inform decision making.
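
The notebook below is the actual tool; purely as an illustration of what such a comparison involves, the core is a join on tract ID followed by agreement statistics. Everything in this sketch (file names, column names) is hypothetical:

```python
import pandas as pd

# Hypothetical inputs: the Justice40 score and a third-party index, both
# keyed by census tract GEOID (read as strings to preserve leading zeros).
j40 = pd.read_csv("j40_score.csv", dtype={"GEOID": str})
other = pd.read_csv("other_index.csv", dtype={"GEOID": str})

merged = j40.merge(other, on="GEOID", how="inner")

# How similarly do the two indices rank tracts?
print(merged["j40_score"].corr(merged["other_score"], method="spearman"))

# How much do their top quartiles of tracts overlap?
n = len(merged) // 4
top_j40 = set(merged.nlargest(n, "j40_score")["GEOID"])
top_other = set(merged.nlargest(n, "other_score")["GEOID"])
print(f"Top-quartile overlap: {len(top_j40 & top_other) / n:.0%}")
```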

Right now, our comparison tool exists simply as a Python notebook in `data/data-pipeline/ipython/scoring_comparison.ipynb`.

To run this comparison tool:

1. Make sure you've gone through the above steps to run the data ETL and score generation.
1. From this directory (`data/data-pipeline`), navigate to the `ipython` directory: `cd ipython`.
1. Ensure you have `pandoc` installed on your computer. If you're on a Mac, run `brew install pandoc`; for other OSes, see pandoc's [installation guide](https://pandoc.org/installing.html).
1. Install the extra dependencies:

   ```
   pip install pypandoc
   pip install requests
   pip install us
   pip install tqdm
   pip install dynaconf
   pip install xlsxwriter
   ```

1. Start the notebooks: `jupyter notebook`
1. In your browser, navigate to one of the URLs returned by the above command.
1. Select `scoring_comparison.ipynb` from the options in your browser.
1. Run through the steps in the notebook. You can step through them one at a time by clicking the "Run" button for each cell, or open the "Cell" menu and click "Run all" to run them all at once.
1. Reports and spreadsheets generated by the comparison tool will be available in `data/data-pipeline/data/comparison_outputs`.
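
If you'd rather run the notebook non-interactively, Jupyter's bundled `nbconvert` can execute it end to end. This is an optional alternative to the browser steps above, not part of them; a sketch, run from the `ipython` directory:

```
jupyter nbconvert --to notebook --execute scoring_comparison.ipynb --output scoring_comparison_run.ipynb
```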

_**NOTE:** This may take anywhere from several minutes to over an hour to fully execute and generate the reports._

### Data Sources

File diff suppressed because it is too large