* working notebook
* updating notebook
* wip
* fixing broken tests
* adding tribal overlap files
* WIP
* WIP
* WIP, calculated count and names
* working
* partial cleanup
* partial cleanup
* updating field names
* fixing bug
* removing pyogrio
* removing unused imports
* updating test fixtures to be more realistic
* cleaning up notebook
* fixing black
* fixing flake8 errors
* adding tox instructions
* updating etl_score
* suppressing warning
* Use projected CRSes, ignore geom types (#1900)
I looked into this a bit, and in general the geometry type mismatch
changes very little about the calculation; we have a mix of
multipolygons and polygons. The fastest thing to do is just not keep
geom type; I did some runs with it set to both True and False, and
they're the same within 9 digits of precision. Logically we just want
the overlaps, regardless of how the actual geometries are encoded
between the frames, so in this case we can ignore the geom types and
feel okay about it.
I also moved to projected CRSes, since we are actually doing area
calculations and should be using one. Again, the change is small in
magnitude but logically more sound.
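For reference, a minimal sketch of that combination in geopandas (the
frame names, file paths, and EPSG code are illustrative, not the
actual ETL code):

    import geopandas as gpd

    # Hypothetical inputs; the real ETL loads tract and tribal geometries.
    tracts = gpd.read_file("tracts.shp")
    tribal = gpd.read_file("tribal.shp")

    # Reproject to a projected, equal-area CRS before measuring area;
    # EPSG:5070 (CONUS Albers) is one common choice.
    tracts = tracts.to_crs("EPSG:5070")
    tribal = tribal.to_crs("EPSG:5070")

    # keep_geom_type=False tolerates the polygon/multipolygon mix, since
    # we only care about the overlapping area, not the geometry type.
    overlap = gpd.overlay(
        tracts, tribal, how="intersection", keep_geom_type=False
    )
    overlap["overlap_area"] = overlap.geometry.area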
* Re-add CDC dataset config (#1900)
* adding comments to fips code
* delete unnecessary loggers
Co-authored-by: matt bowen <matthew.r.bowen@omb.eop.gov>
* Add notebook to generate test data (#1780)
* Add Abandoned Mine Land data (#1780)
Using a similar structure but a simpler approach compared to FUDs, add
an indicator for whether a tract has an abandoned mine.
* Adding some detail to dataset readmes
Just a thought!
* Apply feedback from review (#1780)
* Fixup bad string that broke test (#1780)
* Update a string that I should have renamed (#1780)
* Reduce number of threads to reduce memory pressure (#1780)
* Try not running geo data (#1780)
* Run the high-memory sets separately (#1780)
* Actually deduplicate (#1780)
* Add flag for memory intensive ETLs (#1780)
* Document new flag for datasets (#1780)
* Add flag for new datasets from rebase (#1780)
Co-authored-by: Emma Nechamkin <97977170+emma-nechamkin@users.noreply.github.com>
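The flag boils down to a filter over the dataset registry; a minimal
sketch, assuming a boolean is_memory_intensive field in each dataset's
config (the field name and entries here are stand-ins):

    # Hypothetical registry entries; the real ones live in a yaml config.
    DATASETS = [
        {"name": "census_decennial", "is_memory_intensive": True},
        {"name": "cdc_places", "is_memory_intensive": False},
    ]

    def split_by_memory_pressure(datasets: list) -> tuple:
        # Separate the high-memory ETLs so they can run in their own pass.
        heavy = [d for d in datasets if d.get("is_memory_intensive")]
        light = [d for d in datasets if not d.get("is_memory_intensive")]
        return heavy, light

    heavy, light = split_by_memory_pressure(DATASETS)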
* starting tribal pr
* further pipeline work
* bia merge working
* alaska villages and tribal geo generate
* tribal folders
* adding data full run
* tile generation
* tribal tile deploy
* WIP on parallelizing
* switching to get_tmp_path for nri
* switching to get_tmp_path everywhere necessary
* fixing linter errors
* moving heavy ETLs to front of line
* add hold
* moving cdc places up
* removing unnecessary print
* moving h&t up
* adding parallel to geo post
* better census labels
* switching to concurrent futures
* fixing output
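The shape this run of commits lands on, sketched with
concurrent.futures (run_etl and the dataset names are stand-ins for
the real per-dataset entry points):

    import concurrent.futures

    def run_etl(dataset_name: str) -> str:
        # Placeholder for the real ETL entry point.
        return dataset_name

    # Heavy ETLs go first so they start before the pool fills up.
    datasets = ["national_risk_index", "census_acs", "cdc_places"]

    # max_workers bounds memory pressure as well as CPU use.
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        futures = {pool.submit(run_etl, name): name for name in datasets}
        for future in concurrent.futures.as_completed(futures):
            print(f"finished {futures[future]}")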
* Adds dev dependencies to requirements.txt and re-runs black on codebase
* Adds test and code for national risk index etl, still in progress
* Removes test_data from .gitignore
* Adds test data to national_risk_index tests
* Creates tests and ETL class for NRI data
* Adds tests for load() and transform() methods of NationalRiskIndexETL
* Updates README.md with info about the NRI dataset
* Adds to dos
* Moves tests and test data into a tests/ dir in national_risk_index
* Moves tmp_dir for tests into data/tmp/tests/
* Promotes fixtures to conftest and relocates national_risk_index tests:
The relocation of national_risk_index tests is necessary because tests
can only use fixtures specified in conftests within the same package
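That pytest rule is worth spelling out: a fixture defined in a
conftest.py is only visible to tests in that directory and below. A
minimal illustration, with hypothetical paths:

    # data_pipeline/tests/conftest.py
    import pandas as pd
    import pytest

    @pytest.fixture
    def sample_df():
        # Visible to every test module under data_pipeline/tests/.
        return pd.DataFrame({"GEOID10": ["010010201001"], "value": [1.0]})

    # data_pipeline/tests/test_national_risk_index.py
    def test_transform(sample_df):
        assert not sample_df.empty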
* Fixes issue with df.equals() in test_transform()
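The classic df.equals() pitfall is that it demands identical dtypes
and index; a hedged sketch of the usual fix, using pandas' test helper
(whether that is the exact fix here is an assumption):

    import pandas as pd

    expected = pd.DataFrame({"score": [0.5, 0.7]})
    actual = pd.DataFrame({"score": [0.5, 0.7]})

    # Unlike the bare boolean df.equals(), assert_frame_equal reports
    # which cells differ and can relax dtype strictness.
    pd.testing.assert_frame_equal(actual, expected, check_dtype=False)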
* Files reformatted by black
* Commit changes to other files after re-running black
* Fixes unused import that caused lint checks to fail
* Moves tests/ directory to app root for data_pipeline
* Adds new methods to ExtractTransformLoad base class:
- __init__() Initializes class attributes
- _get_census_fips_codes() Loads a dataframe with the FIPS codes for
census block group and tract
- validate_init() Checks that the class was initialized correctly
- validate_output() Checks that the output was loaded correctly
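A skeleton of roughly that class surface (paths, field names, and the
config shape are illustrative assumptions, not the actual base.py):

    from pathlib import Path
    import pandas as pd

    class ExtractTransformLoad:
        # Only set for non-census datasets; see the CENSUS_CSV commit below.
        CENSUS_CSV: Path = None

        def __init__(self, config: dict):
            # Initialize class attributes from the dataset config.
            self.name = config["name"]
            self.output_path = Path("data/dataset") / self.name

        def _get_census_fips_codes(self) -> pd.DataFrame:
            # Load a dataframe of FIPS codes for block groups and tracts.
            return pd.read_csv(self.CENSUS_CSV, dtype=str)

        def validate_init(self) -> None:
            # Check that the class was initialized correctly.
            assert self.name, "dataset name missing"

        def validate_output(self) -> None:
            # Check that the output landed where we expect it.
            assert self.output_path.exists(), f"missing {self.output_path}"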
* Adds test for ExtractTransformLoad.__init__() and base.py
* Fixes failing flake8 test
* Changes geo_col to geoid_col and changes is_dataset to is_census in yaml
* Adds test for validate_output()
* Adds remaining tests
* Removes is_dataset from init method
* Makes CENSUS_CSV a class attribute instead of a module-level global:
This ensures that CENSUS_CSV is only set when the ETL class is for a
non-census dataset and removes the need to overwrite the value in the
mock_etl fixture
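A small sketch of why the attribute form helps testing (class and
path names are assumptions):

    class ExtractTransformLoad:
        # Before, this lived as a global in base.py, which tests had to
        # monkeypatch; as a class attribute a subclass can shadow it.
        CENSUS_CSV = None  # only populated for non-census datasets

    class MockETL(ExtractTransformLoad):
        # The mock_etl fixture can now just use this subclass instead of
        # overwriting a module-level value.
        CENSUS_CSV = "tests/fixtures/census.csv"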
* Re-formats files with black and fixes broken tox tests
* Change downloadable file names
* Remove constants because we're dynamically creating these
* Update to "communities" for the descriptor word based on team convo
* Add timestamp in 2020-09-20-0930 format because I personally think
this is the best ^.^
* Add a CLI command to run ETL Score Post so that we don't have to
run the score generation just to get new downloadable files (see the
sketch after this commit).
* Also make sure the old downloadable files are cleaned up when this
command runs.
* Remove unused library, thanks pylint!
Co-authored-by: Shelby Switzer <shelby.switzer@cms.hhs.gov>
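A sketch of those two pieces together, assuming a click-style CLI
(the command name and body here are made up for illustration):

    from datetime import datetime
    import click

    @click.command(name="generate-score-post")
    def generate_score_post():
        # The 2020-09-20-0930 stamp is a plain strftime pattern.
        timestamp = datetime.now().strftime("%Y-%m-%d-%H%M")
        # The real command would also clean up old downloadable files here.
        click.echo(f"writing downloadable files stamped {timestamp}")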
Error this addresses:
File "/Users/lucas/Documents/usds/repos/justice40-tool/data/data-pipeline/data_pipeline/etl/runner.py", line 71, in etl_runner
f"data_pipeline.etl.sources.{dataset['module_dir']}.etl"
TypeError: 'NoneType' object is not subscriptable
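That traceback means the dataset lookup returned None before the
f-string tried to subscript it; a sketch of the guard (the registry
shape and helper name are assumptions):

    def get_dataset(dataset_to_run: str, datasets: list) -> dict:
        # next() with a default returns None instead of raising, which is
        # how a bad name surfaced later as the subscript TypeError above.
        dataset = next(
            (d for d in datasets if d["name"] == dataset_to_run), None
        )
        if dataset is None:
            raise ValueError(f"no ETL module found for {dataset_to_run!r}")
        return dataset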
* Fixes #456 - Our data directory should adopt standard Python package structure
* a few missed references
* updating readme
* updating requirements
* Running Black
* Fixes for flake8
* updating pylint