Combine + Tilefy (#806)

* init

* score-post

* added score csv s3 download; remove poetry cmds from readme

* working census tile fetch

* PR review

* GitHub Actions Work
Jorge Escobar 2021-11-01 18:05:05 -04:00 committed by GitHub
parent 7b87e0ec99
commit 1b17af84c8
13 changed files with 560 additions and 371 deletions

View file

@@ -39,8 +39,8 @@ jobs:
pip install GDAL==3.2.3
- name: Run Scripts
run: |
poetry run download_census
poetry run etl_and_score
poetry run python3 data_pipeline/application.py geo-score -s aws
poetry run python3 data_pipeline/application.py generate-map-tiles -s aws
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
@@ -49,14 +49,6 @@ jobs:
aws-region: us-east-1
- name: Deploy to Geoplatform AWS
run: |
aws s3 sync ./data_pipeline/data/dataset/ s3://justice40-data/data-pipeline/data/dataset --acl public-read --delete
aws s3 sync ./data_pipeline/data/score/csv/ s3://justice40-data/data-pipeline/data/score/csv --acl public-read --delete
aws s3 sync ./data_pipeline/data/score/downloadable/ s3://justice40-data/data-pipeline/data/score/downloadable --acl public-read --delete
- name: Update PR with Comment about deployment
uses: mshick/add-pr-comment@v1
with:
message: |
Data Synced! Find it here: s3://justice40-data/data-pipeline/data/
repo-token: ${{ secrets.GITHUB_TOKEN }}
repo-token-user-login: "github-actions[bot]" # The user.login for temporary GitHub tokens
allow-repeats: false # This is the default
aws s3 rm s3://justice40-data/data-pipeline/data/score/tiles --recursive
aws s3 cp ./data_pipeline/data/score/tiles/ s3://justice40-data/data-pipeline/data/score/tiles --recursive --acl public-read
aws s3 sync ./data_pipeline/data/score/geojson/ s3://justice40-data/data-pipeline/data/score/geojson --acl public-read --delete

View file

@@ -33,7 +33,7 @@ jobs:
run: poetry install
- name: Run Scripts
run: |
poetry run python3 data_pipeline/application.py score-full-run
poetry run python3 data_pipeline/application.py score-full-run -s aws
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
@@ -44,11 +44,3 @@ jobs:
run: |
aws s3 sync ./data_pipeline/data/score/csv/ s3://justice40-data/data-pipeline/data/score/csv --acl public-read --delete
aws s3 sync ./data_pipeline/data/score/downloadable/ s3://justice40-data/data-pipeline/data/score/downloadable --acl public-read --delete
- name: Update PR with Comment about deployment
uses: mshick/add-pr-comment@v1
with:
message: |
Data Synced! Find it here: s3://justice40-data/data-pipeline/data/score
repo-token: ${{ secrets.GITHUB_TOKEN }}
repo-token-user-login: 'github-actions[bot]' # The user.login for temporary GitHub tokens
allow-repeats: false # This is the default

View file

@@ -33,4 +33,4 @@ COPY requirements.txt .
RUN pip3 install -r requirements.txt
RUN pip3 install .
CMD python3 -m data_pipeline.application data-full-run --check
CMD python3 -m data_pipeline.application data-full-run --check -s aws

View file

@@ -104,18 +104,18 @@ TODO add mermaid diagram
1. Call the `etl-run` command using the application manager `application.py` **NOTE:** This may take several minutes to execute.
- With Docker: `docker run --rm -it -v ${PWD}/data/data-pipeline/data_pipeline/data:/data_pipeline/data j40_data_pipeline python3 -m data_pipeline.application etl-run`
- With Poetry: `poetry run etl`
- With Poetry: `poetry run python3 data_pipeline/application.py etl-run`
2. This command will execute the corresponding ETL script for each data source in `data_pipeline/etl/sources/`. For example, `data_pipeline/etl/sources/ejscreen/etl.py` is the ETL script for EJSCREEN data.
3. Each ETL script will extract the data from its original source, then format the data into `.csv` files that get stored in the relevant folder in `data_pipeline/data/dataset/`. For example, HUD Housing data is stored in `data_pipeline/data/dataset/hud_housing/usa.csv`
_**NOTE:** You have the option to pass the name of a specific data source to the `etl-run` command using the `-d` flag, which will limit the execution of the ETL process to that specific data source._
_For example: `poetry run etl -d ejscreen` would only run the ETL process for EJSCREEN data._
_For example: `poetry run python3 data_pipeline/application.py etl-run -d ejscreen` would only run the ETL process for EJSCREEN data._
#### Step 3: Calculate the Justice40 score experiments
1. Call the `score-run` command using the application manager `application.py` **NOTE:** This may take several minutes to execute.
- With Docker: `docker run --rm -it -v ${PWD}/data/data-pipeline/data_pipeline/data:/data_pipeline/data j40_data_pipeline python3 -m data_pipeline.application score-run`
- With Poetry: `poetry run score`
- With Poetry: `poetry run python3 data_pipeline/application.py score-run`
1. The `score-run` command will execute the `etl/score/etl.py` script which loads the data from each of the source files added to the `data/dataset/` directory by the ETL scripts in Step 1.
1. These data sets are merged into a single dataframe using their Census Block Group GEOID as a common key, and the data in each of the columns is standardized in two ways:
- Their [percentile rank](https://en.wikipedia.org/wiki/Percentile_rank) is calculated, which tells us what percentage of other Census Block Groups have a lower value for that particular column.
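The percentile-rank standardization described above maps directly onto pandas; a minimal sketch, using a toy column name rather than the pipeline's actual field names:

import pandas as pd

# Toy frame keyed on Census Block Group GEOID (values are made up for illustration)
df = pd.DataFrame(
    {
        "GEOID10": ["010010201001", "010010201002", "010010202001"],
        "pm25_level": [7.1, 9.4, 8.2],
    }
)

# rank(pct=True) returns each row's rank as a fraction of all rows,
# i.e. roughly the share of block groups with a lower value for this column
df["pm25_level_percentile"] = df["pm25_level"].rank(pct=True)
print(df)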
@@ -203,32 +203,58 @@ To install the above-named executables:
### Windows Users
If you want to run tile generation, please install TippeCanoe [following these instrcutions](https://github.com/GISupportICRC/ArcGIS2Mapbox#installing-tippecanoe-on-windows). You also need some pre-requisites for Geopandas as specified in the Poetry requirements. Please follow [these instructions](https://stackoverflow.com/questions/56958421/pip-install-geopandas-on-windows) to install the Geopandas dependency locally.
If you want to run tile generation, please install TippeCanoe [following these instructions](https://github.com/GISupportICRC/ArcGIS2Mapbox#installing-tippecanoe-on-windows). You also need some pre-requisites for Geopandas as specified in the Poetry requirements. Please follow [these instructions](https://stackoverflow.com/questions/56958421/pip-install-geopandas-on-windows) to install the Geopandas dependency locally. It's definitely easier if you have access to WSL (Windows Subsystem Linux), and install these packages using commands similar to our [Dockerfile](https://github.com/usds/justice40-tool/blob/main/data/data-pipeline/Dockerfile).
### Setting up Poetry
- Start a terminal
- Change to this directory (`/data/data-pipeline/`)
- Make sure you have Python 3.9 installed: `python -V` or `python3 -V`
- Make sure you have at least Python 3.7 installed: `python -V` or `python3 -V`
- We use [Poetry](https://python-poetry.org/) for managing dependencies and building the application. Please follow the instructions on their site to download.
- Install Poetry requirements with `poetry install`
### Downloading Census Block Groups GeoJSON and Generating CBG CSVs
### The Application entrypoint
- Make sure you have Docker running in your machine
After installing the poetry dependencies, you can see a list of commands with the following steps:
- Start a terminal
- Change to the package directory (i.e. `cd data/data-pipeline/data_pipeline/`)
- If you want to clear out all data and tiles from all directories, you can run: `poetry run cleanup_data`.
- Then run `poetry run download_census`
Note: Census files are not kept in the repository and the download directories are ignored by Git
- Change to the package directory (i.e. `cd data/data-pipeline/data_pipeline`)
- Then run `poetry run python3 data_pipeline/application.py --help`
### Downloading Census Block Groups GeoJSON and Generating CBG CSVs (not normally required)
- Start a terminal
- Change to the package directory (i.e. `cd data/data-pipeline/data_pipeline`)
- If you want to clear out all data and tiles from all directories, you can run: `poetry run python3 data_pipeline/application.py data-cleanup`.
- Then run `poetry run python3 data_pipeline/application.py census-data-download`
Note: Census files are hosted in the Justice40 S3 and you can skip this step by passing the `-s aws` or `--data-source aws` flag in the scripts below
### Run all ETL, score and map generation processes
- Start a terminal
- Change to the package directory (i.e. `cd data/data-pipeline/data_pipeline`)
- Then run `poetry run python3 data_pipeline/application.py data-full-run -s aws`
- Note: The `-s` flag is optional if you have generated/downloaded the census data
### Run both ETL and score generation processes
- Start a terminal
- Change to the package directory (i.e. `cd data/data-pipeline/data_pipeline`)
- Then run `poetry run python3 data_pipeline/application.py score-full-run`
### Run all ETL processes
- Start a terminal
- Change to the package directory (i.e. `cd data/data-pipeline/data_pipeline`)
- Then run `poetry run python3 data_pipeline/application.py etl-run`
### Generating Map Tiles
- Make sure you have Docker running in your machine
- Start a terminal
- Change to the package directory (i.e. `cd data/data-pipeline/data_pipeline`)
- Then run `poetry run generate_tiles`
- Then run `poetry run python3 data_pipeline/application.py generate-map-tiles -s aws`
- If you have S3 keys, you can sync to the dev repo by doing `aws s3 sync ./data_pipeline/data/score/tiles/ s3://justice40-data/data-pipeline/data/score/tiles --acl public-read --delete`
- Note: The `-s` flag is optional if you have generated/downloaded the score data
### Serve the map locally
@@ -248,8 +274,7 @@ If you want to run tile generation, please install TippeCanoe [following these i
- Activate a Poetry Shell (see above)
- Run `jupyter contrib nbextension install --user`
- Run `jupyter nbextension enable python-markdown/main`
- Make sure you've loaded the Jupyter notebook in a "Trusted" state. (See button near
top right of Notebook screen.)
- Make sure you've loaded the Jupyter notebook in a "Trusted" state. (See button near top right of Notebook screen.)
For more information, see [nbextensions docs](https://jupyter-contrib-nbextensions.readthedocs.io/en/latest/install.html) and
see [python-markdown docs](https://github.com/ipython-contrib/jupyter_contrib_nbextensions/tree/master/src/jupyter_contrib_nbextensions/nbextensions/python-markdown).

View file

@@ -25,6 +25,8 @@ from data_pipeline.utils import (
logger = get_module_logger(__name__)
dataset_cli_help = "Grab the data from either 'local' for local access or 'aws' to retrieve from Justice40 S3 repository"
@click.group()
def cli():
@@ -93,7 +95,13 @@ def census_data_download(zip_compress):
@cli.command(
help="Run all ETL processes or a specific one",
)
@click.option("-d", "--dataset", required=False, type=str)
@click.option(
"-d",
"--dataset",
required=False,
type=str,
help=dataset_cli_help,
)
def etl_run(dataset: str):
"""Run a specific or all ETL processes
@@ -134,9 +142,16 @@ def score_full_run():
@cli.command(help="Generate Geojson files with scores baked in")
@click.option("-d", "--data-source", default="local", required=False, type=str)
@click.option(
"-s",
"--data-source",
default="local",
required=False,
type=str,
help=dataset_cli_help,
)
def geo_score(data_source: str):
"""CLI command to generate the score
"""CLI command to combine score with GeoJSON data and generate low and high files
Args:
data_source (str): Source for the census data (optional)
@@ -166,11 +181,29 @@ def generate_map_tiles():
@cli.command(
help="Run etl_score_post to create score csv, tile csv, and downloadable zip",
)
def generate_score_post():
"""CLI command to generate score, tile, and downloadable files"""
@click.option(
"-s",
"--data-source",
default="local",
required=False,
type=str,
help=dataset_cli_help,
)
def generate_score_post(data_source: str):
"""CLI command to generate score, tile, and downloadable files
Args:
data_source (str): Source for the census data (optional)
Options:
- local: fetch census and score data from the local data directory
- aws: fetch census and score from AWS S3 J40 data repository
Returns:
None
"""
downloadable_cleanup()
score_post()
score_post(data_source)
sys.exit()
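The `-s/--data-source` option added here (and to the other commands in this file) reuses one shared help string; a stripped-down, self-contained sketch of that click pattern, with a hypothetical command name:

import click

dataset_cli_help = (
    "Grab the data from either 'local' for local access or 'aws' "
    "to retrieve from Justice40 S3 repository"
)

@click.group()
def cli():
    """Toy CLI illustrating the shared data-source option."""

@cli.command(help="Example command that accepts a data source")
@click.option(
    "-s",
    "--data-source",
    default="local",
    required=False,
    type=str,
    help=dataset_cli_help,
)
def example_command(data_source: str):
    # click maps --data-source to the data_source parameter
    click.echo(f"Using data source: {data_source}")

if __name__ == "__main__":
    cli()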
@@ -183,11 +216,23 @@ def generate_score_post():
is_flag=True,
help="Check if data run has been run before, and don't run it if so.",
)
def data_full_run(check):
@click.option(
"-s",
"--data-source",
default="local",
required=False,
type=str,
help=dataset_cli_help,
)
def data_full_run(check: bool, data_source: str):
"""CLI command to run ETL, score, JSON combine and generate tiles in one command
Args:
check (bool): Run the full data run only if the first run sempahore file is not set (optional)
data_source (str): Source for the census data (optional)
Options:
- local: fetch census and score data from the local data directory
- aws: fetch census and score from AWS S3 J40 data repository
Returns:
None
@@ -206,6 +251,7 @@ def data_full_run(check):
score_folder_cleanup()
temp_folder_cleanup()
if data_source == "local":
logger.info("*** Downloading census data")
etl_runner("census")
@@ -215,8 +261,11 @@ def data_full_run(check):
logger.info("*** Generating Score")
score_generate()
logger.info("*** Running Post Score scripts")
score_post(data_source)
logger.info("*** Combining Score with Census Geojson")
score_geo()
score_geo(data_source)
logger.info("*** Generating Map Tiles")
generate_tiles(data_path)

View file

@@ -87,17 +87,20 @@ def score_generate() -> None:
score_post()
def score_post() -> None:
def score_post(data_source: str = "local") -> None:
"""Posts the score files to the local directory
Args:
None
data_source (str): Source for the census data (optional)
Options:
- local (default): fetch census data from the local data directory
- aws: fetch census from AWS S3 J40 data repository
Returns:
None
"""
# Post Score Processing
score_post = PostScoreETL()
score_post = PostScoreETL(data_source=data_source)
score_post.extract()
score_post.transform()
score_post.load()
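Driving the post-score step from Python instead of the CLI is a one-liner; a minimal sketch, with the import path assumed since the diff does not show this file's name:

# Assumed import location; the hunk above lives in the pipeline's ETL runner module
from data_pipeline.etl.runner import score_post

# With "aws", PostScoreETL fetches its census inputs from the Justice40 S3 bucket
# instead of requiring a local copy
score_post(data_source="aws")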
@@ -108,7 +111,7 @@ def score_geo(data_source: str = "local") -> None:
"""Generates the geojson files with score data baked in
Args:
census_data_source (str): Source for the census data (optional)
data_source (str): Source for the census data (optional)
Options:
- local (default): fetch census data from the local data directory
- aws: fetch census from AWS S3 J40 data repository

View file

@@ -6,6 +6,7 @@ from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.etl.sources.census.etl_utils import (
check_census_data_source,
)
from data_pipeline.etl.score.etl_utils import check_score_data_source
from data_pipeline.utils import get_module_logger
logger = get_module_logger(__name__)
@@ -17,6 +18,7 @@ class GeoScoreETL(ExtractTransformLoad):
"""
def __init__(self, data_source: str = None):
self.DATA_SOURCE = data_source
self.SCORE_GEOJSON_PATH = self.DATA_PATH / "score" / "geojson"
self.SCORE_LOW_GEOJSON = self.SCORE_GEOJSON_PATH / "usa-low.json"
self.SCORE_HIGH_GEOJSON = self.SCORE_GEOJSON_PATH / "usa-high.json"
@@ -46,6 +48,12 @@ class GeoScoreETL(ExtractTransformLoad):
census_data_source=self.DATA_SOURCE,
)
# check score data
check_score_data_source(
score_csv_data_path=self.SCORE_CSV_PATH,
score_data_source=self.DATA_SOURCE,
)
logger.info("Reading US GeoJSON (~6 minutes)")
self.geojson_usa_df = gpd.read_file(
self.CENSUS_USA_GEOJSON,

View file

@@ -3,6 +3,9 @@ import pandas as pd
from data_pipeline.etl.base import ExtractTransformLoad
from data_pipeline.utils import get_module_logger, zip_files
from data_pipeline.etl.sources.census.etl_utils import (
check_census_data_source,
)
from . import constants
logger = get_module_logger(__name__)
@@ -14,7 +17,8 @@ class PostScoreETL(ExtractTransformLoad):
datasets.
"""
def __init__(self):
def __init__(self, data_source: str = None):
self.DATA_SOURCE = data_source
self.input_counties_df: pd.DataFrame
self.input_states_df: pd.DataFrame
self.input_score_df: pd.DataFrame
@@ -66,6 +70,13 @@
def extract(self) -> None:
logger.info("Starting Extraction")
# check census data
check_census_data_source(
census_data_path=self.DATA_PATH / "census",
census_data_source=self.DATA_SOURCE,
)
super().extract(
constants.CENSUS_COUNTIES_ZIP_URL,
constants.TMP_PATH,

View file

@@ -0,0 +1,50 @@
import os
import sys
from pathlib import Path
from data_pipeline.config import settings
from data_pipeline.utils import (
download_file_from_url,
get_module_logger,
)
logger = get_module_logger(__name__)
def check_score_data_source(
score_csv_data_path: Path,
score_data_source: str,
) -> None:
"""Checks if census data is present, and exits gracefully if it doesn't exist. It will download it from S3
if census_data_source is set to "aws"
Args:
score_csv_data_path (str): Path for local Score CSV data
score_data_source (str): Source for the score data
Options:
- local: fetch census data from the local data directory
- aws: fetch census from AWS S3 J40 data repository
Returns:
None
"""
TILE_SCORE_CSV_S3_URL = (
settings.AWS_JUSTICE40_DATAPIPELINE_URL
+ "/data/score/csv/tiles/usa.csv"
)
TILE_SCORE_CSV = score_csv_data_path / "tiles" / "usa.csv"
# download from s3 if census_data_source is aws
if score_data_source == "aws":
logger.info("Fetching Score Tile data from AWS S3")
download_file_from_url(
file_url=TILE_SCORE_CSV_S3_URL, download_file_name=TILE_SCORE_CSV
)
else:
# check if score data is found locally
if not os.path.isfile(TILE_SCORE_CSV):
logger.info(
"No local score tiles data found. Please use '-d aws` to fetch from AWS"
)
sys.exit()
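A quick sketch of calling this guard directly (the local CSV path below follows the `data/score/csv` layout used elsewhere in this diff and is an assumption; normally GeoScoreETL invokes it for you):

from pathlib import Path

from data_pipeline.etl.score.etl_utils import check_score_data_source

# With "aws" the tile CSV is downloaded to <score_csv_data_path>/tiles/usa.csv;
# with "local" the function exits if that file is not already present.
check_score_data_source(
    score_csv_data_path=Path("data_pipeline/data/score/csv"),  # assumed local layout
    score_data_source="aws",
)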

View file

@@ -107,13 +107,13 @@ def check_census_data_source(
# check if census data is found locally
if not os.path.isfile(census_data_path / "geojson" / "us.json"):
logger.info(
"No local census data found. Please use '-cds aws` to fetch from AWS"
"No local census data found. Please use '-d aws` to fetch from AWS"
)
sys.exit()
def zip_census_data():
logger.info("Compressing and uploading census files to AWS S3")
logger.info("Compressing census files to data/tmp folder")
CENSUS_DATA_PATH = settings.APP_ROOT / "data" / "census"
TMP_PATH = settings.APP_ROOT / "data" / "tmp"

File diff suppressed because it is too large

View file

@@ -114,13 +114,3 @@ authorized_licenses = [
"gpl v3",
"historical permission notice and disclaimer (hpnd)",
]
[tool.poetry.scripts]
cleanup_census = 'data_pipeline.application:census_cleanup'
cleanup_data = 'data_pipeline.application:data_cleanup'
download_census = 'data_pipeline.application:census_data_download'
etl = 'data_pipeline.application:etl_run'
generate_tiles = 'data_pipeline.application:generate_map_tiles'
score = 'data_pipeline.application:score_run'
score_geo = 'data_pipeline.application:geo_score'
etl_and_score = 'data_pipeline.application:score_full_run'

View file

@@ -1,5 +1,6 @@
[default]
AWS_JUSTICE40_DATASOURCES_URL = "https://justice40-data.s3.amazonaws.com/data-sources"
AWS_JUSTICE40_DATAPIPELINE_URL = "https://justice40-data.s3.amazonaws.com/data-pipeline"
[development]
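The new URL is consumed through the same `settings` object imported in the new `etl_utils.py` above; a minimal sketch of reading it:

from data_pipeline.config import settings

# Mirrors how check_score_data_source assembles the S3 URL for the tile CSV
tile_score_csv_s3_url = (
    settings.AWS_JUSTICE40_DATAPIPELINE_URL + "/data/score/csv/tiles/usa.csv"
)
print(tile_score_csv_s3_url)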