Lab 4 update
This commit is contained in:
parent
4d438cfa89
commit
bb50715f28
1 changed file with 238 additions and 418 deletions
|
@ -4,14 +4,14 @@
|
|||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# <p style=\"text-align: center;\">PHYS 134L Spring 2022 Lab 3</p>"
|
||||
"# <p style=\"text-align: center;\">PHYS 134L Spring 2022 Lab 4</p>"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"<div class=\"alert alert-block alert-danger\"><b>Due date:</b> Sunday, April 24th, 2022 by 11:59pm, submitted through Gradescope.</div>"
|
||||
"<div class=\"alert alert-block alert-danger\"><b>Due date:</b> Sunday, May 1st, 2022 by 11:59pm, submitted through Gradescope.</div>"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
@ -32,9 +32,9 @@
|
|||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Read through this entire lab before you start. In this lab, we will look at the brightness of stars as measured in astronomical units called ''magnitudes,'', you'll continue to think about how telescopes form images and how that relates to the apparent sizes of stars in our images. You'll also start thinking about how the background brightness of sky affects our images, you'll continue working with photon noise, and we'll start thinking about \"systematic\" errors. \n",
|
||||
"The previous lab explored the systematic differences that result from doing a measurement in two slightly different ways. Next, we will try to replicate the measurements made by Source Extractor using ```Astropy``` packages. The main new package we will use is called ```photutils```, an astropy package used for photometry--the measurement of the brightness of astronomical objects. You can find all the documentation for ```photutils``` [here.](https://photutils.readthedocs.io/en/stable/)\n",
|
||||
"\n",
|
||||
"To complete this lab you should have already read textbook Chapter 3 and Sections 2.3 and 2.4. In this lab, you will be asked to type out some equations. In a jupyter notebook you can do this using standard LATEX math notation. If you're new to LATEX you can use [this online equation editor](https://latex.codecogs.com/) to help you along to start. Later in this course you'll be using Latex as well, so now is a good time to start learning!"
|
||||
"We'll again be using ```cluster.fits``` from our previous labs. "
|
||||
]
|
||||
},
|
||||
{
|
||||
|
@ -42,14 +42,28 @@
|
|||
"metadata": {},
|
||||
"source": [
|
||||
"\n",
|
||||
"## <p style=\"text-align: center;\">Part 1: Astronomical Magnitudes</p>"
|
||||
"## <p style=\"text-align: center;\">Part 1: Background Estimation</p>"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**We'll begin by making a two-dimensional plot of ```cluster1.fits``` and overplotting the locations of the stars given in ```cluster1.cat.```** Refer to the first lab if you have forgotten how to do this. As in that lab, please use a scale that shows the full dynamic range of the image and that looks nice: no garish color schemes, and few (if any) pixels beyond the limits of your color scale. The $x$ and $y$ pixel locations might be in different locations than they were in ```object.cat``` from Lab 1, so please check and make sure that you are reading the correct column. You may open ```cluster1.cat``` in a text editor (like notepad) or on JupyterHub to double-check the contents of each column.\n"
|
||||
"The first step will be determining the background level, so that we don't overestimate the amount of light coming from any given star. We can do this in several ways. First, open ```cluster.fits``` in DS9. **Hover your mouse around the background and try to estimate by eye what you think the background level is, enter it here:**"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*Your answer here*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Another way to estimate the background here is by looking at the median of the image. **Load the fits file into a new jupyter notebook and get the median value of the whole array.**"
|
||||
]
|
||||
},
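A minimal sketch of this step, assuming the image sits in the primary HDU of a file named ```cluster.fits``` (adjust the path and HDU index to match your setup):

```python
import numpy as np
from astropy.io import fits

# Open the FITS file and pull the image out of the primary HDU
with fits.open("cluster.fits") as hdul:
    data = hdul[0].data.astype(float)

# A first, crude background estimate: the median of every pixel in the array
print("Median pixel value:", np.median(data))
```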
|
||||
{
|
||||
|
@ -65,7 +79,21 @@
|
|||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The file ```cluster1.cat``` has many columns besides the $x$ and $y$ positions. We will now look at the information in the columns ```MAG_ISOCOR``` and ```MAGERR_ISOCOR```. **First, make a scatter plot showing ```MAG_ISOCOR``` as a function of ```NUMBER```, i.e., simply plot the magnitudes as a function of the order in which they appear in the catalog. Overplot the error bars given by ```MAGERR_ISOCOR```.** Check the documentation for ```pyplot.errorbar``` if you are unsure of how to get started."
|
||||
"**What kind of probablems can you think that might make a simple median a poor estimate?**"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*Your answer here*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Read the photutils background estimation [documentation](https://photutils.readthedocs.io/en/stable/background.html) to determine the best method to use on this data. Play with image scaling and colorbars to help determine if a 2D background estimation is needed. Describe below your final decision on the best method here and why"
|
||||
]
|
||||
},
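As a starting point (not necessarily the "best method" you are asked to justify), here is a hedged sketch of a 2D background estimate with sigma-clipped medians, following the photutils background docs; the box and filter sizes are placeholders to tune:

```python
from astropy.stats import SigmaClip
from photutils.background import Background2D, MedianBackground

# Sigma clipping rejects star pixels before the background is estimated in each box
sigma_clip = SigmaClip(sigma=3.0)
bkg = Background2D(data, box_size=(50, 50), filter_size=(3, 3),
                   sigma_clip=sigma_clip, bkg_estimator=MedianBackground())

print("Global median background:", bkg.background_median)
data_sub = data - bkg.background  # background-subtracted image for the later parts
```

Comparing ```bkg.background_median``` with your by-eye and whole-array-median estimates is a quick sanity check.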
|
||||
{
|
||||
|
@ -77,165 +105,6 @@
|
|||
"#Your code here"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"It's hard to interpret that previous plot; it looks like a bit of a mess. To make it easier to see what is going on, let's sort the stars by magnitude. You'll want to sort the magnitudes and their uncertainties in the same way. In other words, you want the same uncertainty to remain associated with each measured magnitude. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Assume that you have read in the catalog file and set an array called ```mags``` equal to the column ```MAG_ISOCOR``` and an array called ```mag_errs``` equal to the column ```MAGERR_ISOCOR```. Explain why the following code will produce nonsense if you use the array ```mag_errs_sorted``` for the error bars of ```mags_sorted```:\n",
|
||||
"```\n",
|
||||
"mags_sorted = np.sort(mags)\n",
|
||||
"mag_errs_sorted = np.sort(mag_errs)\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*Your answer here*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"To sort the arrays in the same way, try ```np.argsort```. Experiment yourself a bit to get it working, and read the documentation page. \n"
|
||||
]
|
||||
},
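A short illustration with made-up numbers, just to show that ```np.argsort``` keeps each uncertainty attached to its magnitude:

```python
import numpy as np

mags = np.array([18.2, 15.7, 16.9])
mag_errs = np.array([0.30, 0.02, 0.08])

order = np.argsort(mags)           # indices that would sort mags
mags_sorted = mags[order]          # [15.7, 16.9, 18.2]
mag_errs_sorted = mag_errs[order]  # [0.02, 0.08, 0.30] -- still paired with the right magnitudes
```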
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#Your code here"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**Now remake your plot of magnitudes with error bars, but sorted by magnitude on the horizontal axis.**"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#Your code here"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Do the estimated errors varay with magnitude in a systematic way? Explain"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*Your answer here*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"So far we have used catalogs output by Source Extractor to obtain only the most minimal information about the objects in the detector field of view--the positions and fluxes from each detected object. We will now look at Source Extractor output files for which we have asked the program to do a more thorough job. Typically we want more kinds of information about each object for one of three reasons: to represent the data in a more convenient way, to display the result of a different way to estimate a number (such as the flux) for which we already have a value, or to give information about the reliability of the results displayed in other columns.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We will continue to work with the catalog given by ```cluster1.cat```, but now will load the columns corresponding to ```FLUX_ISOCOR``` and ```FLUXERR_ISOCOR```. Open the catalog file in a text editor like notepad if you cannot find them. As usual, remember that Source Extractor counts from 1, while python indexes from 0. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now we will check the relationship between flux and magnitude, if you need a reminder, check out Section 3.2 of the Burns. **Make a scatter plot with ```MAG_ISOCOR``` on the $x$-axis and ```FLUX_ISOCOR``` on the $y$-axis. Once you have done this, adjust the vertical axis to have a logarithmic scaling (check ```pyplot.yscale```).** "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Once you have this scatter plot, try fitting a line by eye. The mechanics of how to actually do this properly are beyond the scope of this class. For now, please define a variable $x$ by\n",
|
||||
"```\n",
|
||||
"x = np.linspace(np.amin(mags,np.amax(mags)),10)\n",
|
||||
"```\n",
|
||||
"and a variable $y$ by\n",
|
||||
"```\n",
|
||||
"y = a*x + b\n",
|
||||
"```\n",
|
||||
"where you determine the best-fit ceofficients ```a``` and ```b``` by eye. Depending on how you have implemented your logarithmic vertical scale you may need to plot either \n",
|
||||
"```\n",
|
||||
"y = np.exp(a*x+b)\n",
|
||||
"```\n",
|
||||
"or \n",
|
||||
"```\n",
|
||||
"y = 10**(a*x+b)\n",
|
||||
"```\n",
|
||||
"for it to appear as a line on your plot. **Please overplot your best-fit line on the scatter plot of magnitudes against logarithmic flux.**"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#You code here"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**Write down the definition of astronomical magnitudes in the space below.**\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*Your answer here*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**How closely does your fitted constant ```a``` match expectations?**"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*Your answer here*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**Approximately what magnitude would yield a flux of 1 given your values of ```a``` and ```b```? Please show your work.**"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
|
@ -248,147 +117,16 @@
|
|||
"metadata": {},
|
||||
"source": [
|
||||
"\n",
|
||||
"## <p style=\"text-align: center;\">Part 2: The Sizes of (Images of) Stars</p>"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The catalog file has a column entitled ```FWHM_IMAGE``` (meaning Full Width at Half Maximum of the star image). This parameter is a measure of the width of the star images in units of pixels -- smaller numbers mean sharper images. You can also estimate these values yourself by looking at the image in ds9. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**Open the image in ds9, set the “scale” option to “zscale,” and estimate the size of the images\n",
|
||||
"of stars (in pixels). How do these sizes compare with the FWHM IMAGE values in the catalog\n",
|
||||
"file?**"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*Your answer here*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**Now try setting the ds9 ``scale'' to ``min/max'' and ``linear'' and estimate the sizes of a few of the star images (necessarily the brightest ones, since those are all you can see this way). Use the cursor to read the pixel values. How do the sizes you estimate in this way compare to the FWHM_IMAGE values in the catalog file?**"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*Your answer here*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**Set the ``Horizontal Graph'' option in the ``View'' tab, and estimate the FWHM of a bright star from this graph. How does this estimate compare with the other two estimates you gave above?**"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*Your answer here*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Last lab, you estimated the angular sizes of a few stars; the largest size you should have computed was around 2~mas. You also computed the pixel scale in these images; you should have gotten a scale of around $0.\\!\\!^{\\prime \\prime}58$ arcsec/pixel. **Use these two numbers to estimate the angular size of a star in units of pixels. Please show your work for the unit conversion.**"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*Your answer here*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The angular size of a star (in units of pixels) divided by the image FWHM gives a dimensionless number. If this number is larger than 1, the star is said to be resolved: a larger star makes a larger image. If this number is much less than 1, the star is said to be unresolved, because the size of the star on the image is set almost entirely by something other than its physical size. **What is this dimensionless number, angular size/FWHM, for the current image? Roughly how small would the FWHM have to be for any of the stars to be resolved at all?**\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*Your anwer here*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**To see if all stars have the same size on the image, make a plot of FWHM ($y$-axis) against stellar magnitude ($x$-axis). As usual, please label your axes and make the plot look nice. Do you see evidence for a dependence of FWHM on magnitude?**"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#Your code here"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*Your answer here*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"For telescopes on the ground, the FWHM is usually set by atmospheric turbulence and is called ''seeing.'' A better site gives a shaper image. \n"
|
||||
"## <p style=\"text-align: center;\">Part 2: Source Identification</p>"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now that we've settled on a method to extract the background we can move on to automatically detecting the sources. ```photutils``` has two options for this: ```DAOStarFinder``` and ```IRAFStarFinder```. See the [documentation here](https://photutils.readthedocs.io/en/stable/detection.html). We'll stick with ```DAOStarFinder```, but feel free to read up both methods and experiment with them. \n",
|
||||
"\n",
|
||||
"## <p style=\"text-align: center;\">Part 3: Backgrounds and Photon Noise</p>\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The ```BACKGROUND``` column in ```cluster1.cat``` contains estimates of the sky background flux (per pixel) in the neighborhood of each identified object. **Make a scatter plot of the sky background as a function of ```X_IMAGE```, using any plot symbols you like, but without lines connecting the points.**"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#Your code here"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"What happens if you do use lines to connect the data points? **Explain what the program is doing to generate this unusable plot.**"
|
||||
"**Follow the example in the documentation and generate a list of detected sources.** There are several knobs you can easily turn to change your results: the FWHM, threshhold (i.e. how many ```stds``` above the background a source needs to be before being considered real) and how the ```std``` is calculated. Play with these until you are satisfied. Try to maximize the number of detected sources. **Display the cluster image and overplot the locations of the stars that you found to confirm that things are behaving as you expect.**"
|
||||
]
|
||||
},
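A sketch along the lines of the photutils detection docs; the FWHM and threshold values are starting points to tune, and ```data``` is assumed to be the array loaded from ```cluster.fits```:

```python
import matplotlib.pyplot as plt
from astropy.stats import sigma_clipped_stats
from photutils.detection import DAOStarFinder

# Robust image statistics; std sets the scale of the detection threshold
mean, median, std = sigma_clipped_stats(data, sigma=3.0)

daofind = DAOStarFinder(fwhm=4.0, threshold=5.0 * std)
sources = daofind(data - median)   # or use your 2D-background-subtracted image
print(len(sources), "sources detected")

# Overplot the detections to confirm things look sensible
plt.imshow(data, origin="lower", cmap="gray", vmin=median, vmax=median + 10 * std)
plt.scatter(sources["xcentroid"], sources["ycentroid"],
            s=40, facecolors="none", edgecolors="red")
plt.show()
```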
|
||||
{
|
||||
|
@ -411,16 +149,7 @@
|
|||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**What is a typical value for the sky background? What is the approximate scatter in the reported values?**"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#Your code here"
|
||||
"You'll notice some obviously bright stars don't get identified. Go look at the image in DS9. Do the unidentified stars have anything in common? "
|
||||
]
|
||||
},
|
||||
{
|
||||
|
@ -434,7 +163,7 @@
|
|||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Within Python you can calculate simple statistics (min, max, mean, standard deviation) for data sets using ```numpy``` functions. Ordinarily, your visual impression of the typical value of a bunch of plotted data points will closely approximate the mean value, and your visual estimate of the peak-to-peak scatter (ignoring rare outliers) will be roughly 4 times the standard deviation. How do mean and standard deviation computed with Python compare with the visual estimates you just made?"
|
||||
"Do these same stars get picked up by the Source Extractor? If so, how reliable do you think their aperture photometry will be?"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
@ -443,7 +172,7 @@
|
|||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#Your code here"
|
||||
"#Your code here (if need be)"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
@ -453,22 +182,35 @@
|
|||
"*Your answer here*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Source Extractor may have methods to compensate for these kinds of issues (such as the \"FLAG\" column), but this kind of thing should raise your suspicion and motivate you to read the documentation to find out. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"\n",
|
||||
"**Now, make a scatter plot ```MAGERR_ISOCOR``` as a function of ```MAG_ISOCOR```.** ```MAGERR_ISOCOR``` is an estimate of the error in ```MAG_ISOCOR```, based on square-root-rule counting error for the number of detected photons in the object plus the underlying sky background (derived from the Poisson distribution). The total number of sky+object photons is\n",
|
||||
"$$\n",
|
||||
"N_{\\rm phot} = {\\tt GAIN} \\cdot ({\\tt FLUX\\_ISOCOR} + {\\tt BACKGROUND} \\cdot N_{\\rm pix})\n",
|
||||
"$$\n",
|
||||
"Where ````GAIN```` is the number of detected photo-electrons per signal count (2.36 $e^-$/count for this image, determined from the ```EGAIN```\n",
|
||||
"keyword in the FITS header). $N_{\\rm pix}$ is the number of pixels occupied by the object image. With ```ISOCOR``` photometry this number is hard to know for sure, but a reasonable guess is $\\pi ({\\rm FWHM}/2)^2$. **Use this guess, the ```GAIN``` value just given, and your estimate of ```BACKGROUND``` to write an expression for $N_{\\rm phot}$ as a function of ```FLUX_ISOCOR```.**"
|
||||
"## <p style=\"text-align: center;\">Part 3: Aperture Photometry</p>"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now we're moving on to extracting the photometry. Here we'll be carrying out photometry using a technique called \"aperture photometry\", where we simply add up all the counts within a circular aperture. See the ```photutils``` documentation [here](https://photutils.readthedocs.io/en/stable/aperture.html). **Read up on the documentation and try it out yourself.** There are other types of photometry, such as \"weighted PSF\" photometry, that may be useful in your final project. \n",
|
||||
"\n",
|
||||
"To get started quickly you can ignore some of these sections in the documentation: sky apertures, pixel masking, aperture masks, aperture photometry using Sky Coordinates. \n",
|
||||
"\n",
|
||||
"**Extract photometry using circular apertures for all the sources you found using the DAOStarFinder. Don't forget to subtract off the background (using whatever method you settled on earlier) before calculating the photometry. What do you think the radius of your circular apertures should be and why?**"
|
||||
]
|
||||
},
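A minimal aperture-photometry sketch based on the photutils aperture docs; ```sources``` is the ```DAOStarFinder``` table and ```data_sub``` the background-subtracted image from earlier, and the 4-pixel radius is only a placeholder for whatever radius you justify:

```python
import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

positions = np.transpose((sources["xcentroid"], sources["ycentroid"]))
apertures = CircularAperture(positions, r=4.0)  # radius in pixels -- justify your own choice

phot_table = aperture_photometry(data_sub, apertures)
print(phot_table["aperture_sum"][:5])
```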
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
|
@ -486,106 +228,35 @@
|
|||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**Now use the flux-vs-magnitude expression from page 2 to write an expression for $N_{\\rm phot}$ as a function of ```MAG_ISOCOR```.**"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*Your answer here*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The expected counting error (measured in photo-electrons) is the square root of $N_{\\rm phot}$. To get the noise in units of counts, we must\n",
|
||||
"divide by the ```GAIN```. **Write an expression for this counting error as a function of MAG_ISOCOR}.**\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*Your answer here*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now the tricky part. **What error (in magnitudes) is implied by this error in the measured value of $N_{\\rm phot}$?** Assume that the error is small\n",
|
||||
"compared to $N_{\\rm phot}$ itself (i.e., that $N_{\\rm phot} \\ll 1$), and show that if\n",
|
||||
"$$\n",
|
||||
"{\\rm mag} = m_1 - 2.5 \\log_{10}({\\rm flux})\n",
|
||||
"$$\n",
|
||||
"(where $m_1$ is the magnitude that yields ${\\rm flux} = 1$), then\n",
|
||||
"$$\n",
|
||||
"\\delta {\\rm mag} \\approx 1.086 \\frac{\\delta {\\rm flux}}{\\rm flux}.\n",
|
||||
"$$\n",
|
||||
"Derive this expression below. You may use standard propagation of uncertainties and take the square root."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*Your equation here*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**Now combine this expression and the one for counting error to write an expression for the expected error in magnitudes as a function of the object magnitude.**"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*Your equation here*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**Finally, define integer magnitudes covering the range of magnitudes in ```cluster1.cat```:**\n",
|
||||
"\n",
|
||||
"```x = np.arange(int(np.amin(mags)), int(np.amax(mags)) + 1)```\n",
|
||||
"\n",
|
||||
"**and evaluate your expression for magnitude uncertainties at these integer ```x``` values. Overplot this on your plot of ```MAGERR_ISOCOR``` vs.~```MAG_ISOCOR``` using a thick, red line.**"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"\n",
|
||||
"## <p style=\"text-align: center;\">Part 4: Consistency and Systematic Errors</p>\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now we know how to use Source Extractor output to estimate the fluxes and magnitudes of stars, and also the precision that simple physics says we should be achieving. But how well do we know our errors, really? Might effects other than photon counting statistics be dominant? And what about systematic errors, which give us consistent and repeatable wrong answers? How can we test for these?\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"Source Extractor can estimate source fluxes in several ways – two of these are isophotal photometry (which deals well with objects having funny shapes) and aperture photometry (which works well for perfectly round objects, as star images are supposed to be). Check section 7.4 of *Source Extractor for Dummies* for an explanation of what these things mean. The picture on p.~41 is particularly helpful.\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"Since there are different ways to estimate what ought to be the same quantities, let’s see if we get the same answers with aperture photometry as we did with isophotal photometry. If not, the differences tell us something about our errors. \n",
|
||||
"\n",
|
||||
"Make a figure with two graphs, one on top of the other. For this task, you'll want to use subplots sharing the $x$-axis (read the documentation page for ```pyplot.subplots```). On the top axes, plot ```MAG_ISOCOR``` on the $x$-axis and ```MAG_APER``` on the $y$-axis. You should expect a tight but not perfect correlation between the two measurements, with a few dramatic outliers. The wide range in plotted magnitudes makes it hard to see the errors. **On the lower graph plot ```MAG_ISOCOR``` on the $x$-axis and, on the $y$-axis, plot (```MAG_ISOCOR – MAG_APER```). Choose your y-axis plot range carefully, to show the most information.\n",
|
||||
"If necessary, sacrifice a few outliers to show the typical scatter better.**"
|
||||
"**Match your star list with the Source Extractor star list (```cluster1.cat```) by matching the closest x and y positions from the two catalogs. Plot your newly extracted photometry against the ```FLUX_APER``` value from Source Extractor. If they don't match, try to explain a few reasons you think this might be the case.**"
|
||||
]
|
||||
},
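One possible way to do the positional match, sketched with ```scipy.spatial.cKDTree```; the arrays ```x_se```, ```y_se```, and ```flux_aper_se``` are hypothetical names for the columns you read from ```cluster1.cat``` (check the column order yourself, and remember that Source Extractor positions count from 1):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import cKDTree

# x_se, y_se, flux_aper_se: columns read from cluster1.cat (names/order are assumptions -- verify)
tree = cKDTree(np.column_stack((x_se - 1, y_se - 1)))   # shift to 0-indexed pixel coordinates
dist, idx = tree.query(np.column_stack((sources["xcentroid"], sources["ycentroid"])))

good = dist < 2.0                                       # keep only matches within ~2 pixels
my_flux = np.asarray(phot_table["aperture_sum"])[good]
se_flux = flux_aper_se[idx[good]]

plt.loglog(se_flux, my_flux, ".")
plt.xlabel("FLUX_APER (Source Extractor)")
plt.ylabel("Aperture sum (photutils)")
plt.show()
```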
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#Your Code here"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*Your answer here*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Play with the various input parameters to the photometric extraction funtion until you get the two catalogs to match as well as you can. One key parameter you can change is the aperture radius, especially if the background isn't sufficiently subtracted. What radius did you have to choose to match the two catalogs? "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
|
@ -596,7 +267,25 @@
|
|||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"What is the peak-to-peak scatter in the difference (```MAG_ISOCOR - MAG_APER```), if you ignore extreme outliers? How does this compare with the tabulated uncertainties ```MAGERR_ISOCOR``` and ```MAGERR_APER```? What do you conclude from this?"
|
||||
"*Your answer here*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"If changing the radius has such an important impact on the final answer, how does one choose an optimal radius? \n",
|
||||
"\n",
|
||||
"One way to do this is by optimizing the radius to maximize the signal to noise ratio (SNR) in your extracted photometry. To do this we need to estimate the noise in our apertures. The simplest way to do this is to pass to the ```aperture_photometry``` function a map of the expected noise in our image. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"To calculate the noise, we'll first assume that the entire image is limited by the photon noise from our stars and background. This might not be strictly true, as the fainter parts of the image might be limited by the read-out noise of the detector. \n",
|
||||
"\n",
|
||||
"To create the error map, you'll want to manipulate the original data array, before background subtraction. Discuss with your partner why you want to do this before background subtraction. To calculate the error at each pixel, consider the value at that pixel to be the mean of a Poisson distribution, and the error at that pixel will be the square root of the variance of that distribution. Keep in mind that the Poisson distribution applies to the number of *photons* the detector sees, not the number of counts in your image (recall the previous lab's discussion of the detector gain). **Write down the equation for calculating the error in each pixel here:**"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
@ -610,7 +299,32 @@
|
|||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Identify the stars corresponding to 3 of the most extreme outliers in your upper plot, and look at them using ds9. Can you describe the potential source(s) of these systematic errors?"
|
||||
"Using that equation create an error map for the whole data array and pass that into the ```aperture_photometry``` function so that you can get an estimate of the flux error on each star. "
|
||||
]
|
||||
},
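A hedged sketch of what this could look like; the noise model is the photon-only one described above (read noise ignored), ```data``` is the original, non-background-subtracted array, and the gain value is the one quoted for this image in the previous lab's text (verify it against your own ```EGAIN``` header keyword):

```python
import numpy as np
from photutils.aperture import aperture_photometry

gain = 2.36  # e-/count, from the EGAIN header keyword -- verify against your FITS header

# Poisson noise applies to photo-electrons, N_e = gain * counts, so the 1-sigma
# error expressed in counts is sqrt(gain * counts) / gain = sqrt(counts / gain).
error_map = np.sqrt(np.clip(data, 0, None) / gain)

phot_table = aperture_photometry(data_sub, apertures, error=error_map)
print(phot_table["aperture_sum"][:3])
print(phot_table["aperture_sum_err"][:3])
```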
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#Your code here"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Pick 3 random stars from your list and make plots of the SNR as a function of aperture radius (i.e. the aperture sum divided by its error). At what radius do these stars maximize their SNRs?"
|
||||
]
|
||||
},
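A sketch for a single star; ```x0, y0``` stand for the centroid of one of your chosen stars, and ```data_sub``` and ```error_map``` come from the previous steps:

```python
import numpy as np
import matplotlib.pyplot as plt
from photutils.aperture import CircularAperture, aperture_photometry

radii = np.arange(1.0, 15.0, 0.5)
snr = []
for r in radii:
    ap = CircularAperture((x0, y0), r=r)
    tbl = aperture_photometry(data_sub, ap, error=error_map)
    snr.append(float(tbl["aperture_sum"][0]) / float(tbl["aperture_sum_err"][0]))

plt.plot(radii, snr)
plt.xlabel("Aperture radius (pixels)")
plt.ylabel("SNR")
plt.show()
```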
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#Your code here"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
@ -619,6 +333,112 @@
|
|||
"source": [
|
||||
"*Your answer here*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"If they are different, why do you think that is the case?"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*Your answer here*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**For each star calculate its optimal aperture sum and error by maximizing the signal to noise. Convert this to a magnitude as: $$m = -2.5log_{10}(N_{aperture\\_sum}),$$ where $N_{aperture\\_sum}$ is the optimal aperture sum. For each magnitude calculate an error.**"
|
||||
]
|
||||
},
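A minimal sketch of the conversion, assuming hypothetical arrays ```opt_sum``` and ```opt_err``` hold each star's optimal aperture sum and its error; the error propagation uses the standard $\delta m \approx 1.086\,\delta f/f$ relation obtained by propagating errors through $m = -2.5\log_{10} f$:

```python
import numpy as np

# opt_sum, opt_err: arrays of each star's optimal aperture sum and its error (assumed names)
mags = -2.5 * np.log10(opt_sum)
mag_errs = 1.086 * opt_err / opt_sum   # delta_m ~ 1.086 * delta_f / f for small fractional errors
```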
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#Your code here"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"\n",
|
||||
"## <p style=\"text-align: center;\">Part 4: Photometric Calibration</p>"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Leaving aside the issues of error and uncertainty, we still have to put our measured magnitudes on a standard scale, such that the numbers we derive can be directly compared with other work. This comes down to measuring the constant $m_1$:\n",
|
||||
"${\\rm mag} = m_1 - 2.5 \\log_{10}({\\rm flux}),$\n",
|
||||
"so that it is reasonably consistent with other work. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Use the website [sky-map](http://www.sky-map.org/) to determine the magnitudes of the same 4 stars that you used for the image-scale calculation in the previous lab. Make a table below that shows the star number from the Figure in Lab 3, its magnitude from *sky-map*, and (by matching $\\{x,y\\}$ coordinates) the magnitude that you calculated with ```photutils```. Calculate the average difference between your magnitudes and those from *sky-map*, and the standard deviation of this difference:"
|
||||
]
|
||||
},
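A sketch of the comparison, with obviously made-up placeholder numbers standing in for the four magnitudes you look up and measure:

```python
import numpy as np

# Placeholder values -- replace with the sky-map magnitudes and your photutils magnitudes
skymap_mags = np.array([11.2, 12.5, 12.9, 13.4])
my_mags = np.array([-3.1, -1.8, -1.4, -0.9])

diff = skymap_mags - my_mags
print("mean offset:", diff.mean(), "scatter:", diff.std(ddof=1))
```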
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#Your code here"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Star| sky-map Mag | ```photutils``` Mag | difference \n",
|
||||
"---|---|---|---\n",
|
||||
"1 |---|---|---\n",
|
||||
"2 |---|---|---\n",
|
||||
"3 |---|---|---\n",
|
||||
"4 |---|---|---"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"*Your answer here*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**Write an expression for magnitudes in the *sky-map* system (assumed here to be the true on-sky magnitudes) in terms of your derived magnitudes.** This expression will convert your measured magnitudes (based on image counts) into on-sky magnitudes. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": []
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Assume the brightest of the stars in your list of 4 has an **absolute** magnitude of 2.75. What is its distance (in parsecs) based on its apparent magnitude? What uncertainty do you assign to this distance estimate, given only the uncertainty that you just calculated in estimating *sky-map* magnitudes from the magnitudes you calculated? Justify your answers."
|
||||
]
|
||||
},
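For reference, the standard distance-modulus relation and its first-order error propagation (general results, not specific to this image):

```latex
m - M = 5\log_{10}\!\left(\frac{d}{10\ \mathrm{pc}}\right)
\quad\Longrightarrow\quad
d = 10^{(m - M + 5)/5}\ \mathrm{pc},
\qquad
\delta d \approx \frac{\ln 10}{5}\, d\, \delta m .
```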
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
|
@ -637,7 +457,7 @@
|
|||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.8.8"
|
||||
"version": "3.7.3"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
|