mirror of
https://github.com/DOI-DO/j40-cejst-2.git
synced 2025-08-08 05:04:18 -07:00
Update Side Panel Tile Data (#866)
* Update Side Panel Tile Data
* Update Side Panel Tile Data
* Correct indicator names to match csv
* Replace Score with Rate
* Comment out FEMA Loss Rate to troubleshoot
* Removes all "FEMA Loss Rate" array elements
* Revert FEMA to Score
* Remove expected loss rate
* Remove RMP and NPL from BASIC array
* Attempt to make shape mismatch align - update README typo
* Add Score L indicators to TILE_SCORE_FLOAT_COLUMNS
* removing cbg references
* completes the ticket
* Update side panel fields
* Update index file writing to create parent dir
* Updates from linting
* fixing missing field_names for island territories 90th percentile fields
* Update downloadable fields and fix field name
* Update file fields and tests
* Update ordering of fields and leave TODO
* Update pickle after re-ordering of file
* fixing bugs in etl_score_geo
* Repeating index for diesel fix
* passing tests
* adding pytest.ini

Co-authored-by: Vim USDS <vimal.k.shah@omb.eop.gov>
Co-authored-by: Shelby Switzer <shelby.switzer@cms.hhs.gov>
Co-authored-by: lucasmbrown-usds <lucas.m.brown@omb.eop.gov>
This commit is contained in:
parent
83eb7b0982
commit
9709d08ca3
13 changed files with 328 additions and 141 deletions
@@ -159,7 +159,7 @@ We use Docker to install the necessary libraries in a container that can be run
To build the docker container the first time, make sure you're in the root directory of the repository and run `docker-compose build --no-cache`.
- Once completed, run `docker-compose up`. Docker will spin up 3 containers: the client container, the static server container and the data container. Once all data is generated, you can see the application using a browser and navigating to `htto://localhost:8000`.
+ Once completed, run `docker-compose up`. Docker will spin up 3 containers: the client container, the static server container and the data container. Once all data is generated, you can see the application using a browser and navigating to `http://localhost:8000`.
If you want to run specific data tasks, you can open a terminal window, navigate to the root folder for this repository and then execute any command for the application using this format:
@@ -322,7 +322,7 @@ score_initial_df = pd.read_csv(score_csv_path, dtype={"GEOID10_TRACT": "string"}
score_initial_df.to_csv(data_path / "data_pipeline" / "etl" / "score" / "tests" / "sample_data" /"score_data_initial.csv", index=False)
```
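The `dtype={"GEOID10_TRACT": "string"}` pin in the snippet above matters because census tract GEOIDs can begin with zeros, which pandas would silently strip if it inferred an integer dtype. A minimal sketch of the difference (the CSV contents here are placeholder data, not the real score output):

```python
import pandas as pd
from io import StringIO

# Two tract GEOIDs, one with a leading zero (Alabama) and one without (California).
csv_text = "GEOID10_TRACT,score\n01001020100,0.5\n06037206300,0.9\n"

# Pinning the column to a string dtype preserves the leading zero.
df = pd.read_csv(StringIO(csv_text), dtype={"GEOID10_TRACT": "string"})
print(df["GEOID10_TRACT"].iloc[0])  # prints "01001020100"

# Without the pin, pandas infers int64 and the zero is lost.
df_bad = pd.read_csv(StringIO(csv_text))
print(df_bad["GEOID10_TRACT"].iloc[0])  # prints 1001020100
```

The same reasoning applies anywhere the pipeline round-trips GEOIDs through CSV, which is why the sample-data regeneration command keeps the explicit dtype.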
- Now you can move on to updating inidvidual pickles for the tests. Note that it is helpful to do them in this order:
+ Now you can move on to updating individual pickles for the tests. Note that it is helpful to do them in this order:
We have four pickle files that correspond to expected files:
- `score_data_expected.pkl`: Initial score without counties
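Regenerating one of these expected-output pickles follows the same pattern as the CSV step: load the current output and write it back with `to_pickle`. A minimal sketch, assuming placeholder data in place of the real score DataFrame (the file name mirrors `score_data_expected.pkl` from the list above):

```python
import tempfile
from pathlib import Path

import pandas as pd

# Placeholder stand-in for the initial score DataFrame; the real pipeline
# would load its current output here instead.
df = pd.DataFrame({"GEOID10_TRACT": ["01001020100"], "score": [0.5]})

with tempfile.TemporaryDirectory() as tmp:
    pkl_path = Path(tmp) / "score_data_expected.pkl"
    df.to_pickle(pkl_path)  # regenerate the expected fixture

    # Pickles round-trip dtypes exactly, unlike CSV, so the reloaded
    # frame should compare equal column-for-column.
    reloaded = pd.read_pickle(pkl_path)
    pd.testing.assert_frame_equal(df, reloaded)
```

Because pickle preserves dtypes, regenerating fixtures this way avoids the string-vs-integer GEOID drift that CSV round-trips can introduce.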