Thursday, December 10, 2015

Lab 8: Spectral Signature Analysis

Goal and Background:
The purpose of this lab is to gain experience collecting and analyzing spectral signatures. To do this, 12 spectral signatures of different surface features were collected by digitizing AOIs with the polygon drawing tool in Erdas Imagine and then graphing their spectral response curves.

Methods:
In this lab I was given 12 different surface features and needed to obtain the spectral signature of each one from a Landsat ETM+ image. The 12 features were:

1. Standing Water
2. Moving Water
3. Vegetation
4. Riparian Vegetation
5. Crops
6. Urban Grass
7. Dry Soil (Uncultivated)
8. Moist Soil (Uncultivated)
9. Rock
10. Asphalt Highway
11. Airport Runway
12. Concrete Surface (Parking Lot)

To find the spectral signatures, I first had to identify each feature in the image and zoom in on it; I used Google Earth to help distinguish features such as soil and rock. After drawing an AOI over a feature with the polygon tool, I opened the Signature Editor from the Supervised menu and added the AOI as a signature. I then evaluated the values in a table and graphed the signatures, first individually and then all together, to observe trends and differences.
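In Erdas this is all point-and-click, but the underlying computation (the mean DN per band inside an AOI polygon) can be sketched in Python with rasterio and numpy. A minimal sketch; the file names etm_image.img and aoi.shp are placeholders, not the lab data:

import numpy as np
import rasterio
from rasterio.mask import mask
import fiona

# Hypothetical inputs: a Landsat ETM+ stack and an AOI polygon shapefile.
with fiona.open("aoi.shp") as shp:
    geoms = [feature["geometry"] for feature in shp]

with rasterio.open("etm_image.img") as src:
    clipped, _ = mask(src, geoms, crop=True, nodata=0)  # (bands, rows, cols)

# Mean DN per band over the AOI, ignoring the nodata fill
# (a real DN of 0 would also be skipped; acceptable for a sketch).
valid = clipped != 0
signature = [clipped[b][valid[b]].mean() for b in range(clipped.shape[0])]
for band, dn in enumerate(signature, start=1):
    print(f"Band {band}: mean DN = {dn:.1f}")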

Results:
Fig. 1: I first observed the graph for standing water in Lake Wissota to examine its reflectance signature.

Fig. 2: I then compared wet and dry soil based on their reflectance in different bands/wavelengths.
Fig. 3: Next I looked at all of the spectral signatures together to observe similarities and differences between the various features.

The following table lists the bands of highest and lowest reflectance for each feature I observed:

Feature                             Highest     Lowest
Moving Water                        Band 1      Band 4 or 6
Vegetation                          Band 4      Band 3 or 6
Riparian Vegetation                 Band 4      Band 6
Crops                               Band 4      Band 6
Urban Grass                         Band 4      Band 3 or 6
Dry Soil (Uncultivated)             Band 5      Band 3
Wet Soil (Uncultivated)             Band 5      Band 6
Rock                                Band 5      Band 4
Asphalt Highway                     Band 5      Band 4 or 6
Airport Runway                      Band 5      Band 4
Concrete Surface (Parking Lot)      Band 1      Band 4




Band wavelengths (micrometers)
Band 1 (Blue): 0.45-0.52
Band 2 (Green): 0.52-0.60
Band 3 (Red): 0.63-0.69
Band 4 (NIR): 0.77-0.90
Band 5 (Short-wave Infrared): 1.55-1.75
Band 6 (Thermal Infrared): 10.40-12.50
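Plotting a signature against these wavelengths makes the curve shapes easier to compare. A minimal matplotlib sketch, using placeholder DN values for a vegetation-like curve rather than my actual measurements (Band 6 is omitted because 10-12 micrometers would crush the visible/NIR detail on a linear axis):

import matplotlib.pyplot as plt

# Band center wavelengths in micrometers, from the table above.
centers = {1: 0.485, 2: 0.560, 3: 0.660, 4: 0.835, 5: 1.650}
# Placeholder mean DN values (not real measurements from this lab).
mean_dn = {1: 60, 2: 55, 3: 45, 4: 120, 5: 90}

bands = sorted(centers)
plt.plot([centers[b] for b in bands], [mean_dn[b] for b in bands], marker="o")
plt.xlabel("Wavelength (micrometers)")
plt.ylabel("Mean DN")
plt.title("Example spectral signature (placeholder values)")
plt.show()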


The spectral signatures for standing and moving water differ significantly from the rest because most of the other features reflect infrared radiation in one way or another, while water tends to absorb it. The runway signature is also quite distinctive. I expected it to be similar to the concrete parking lot signature, which is definitely not the case: the runway, being a flat white surface, reflects a large amount of radiation in every band except NIR, where its curve shows a large dip, and since concrete is also a bright flat surface I am curious why the parking lot did not have a similar signature. All of the plant features more or less follow the same trend in that they reflect a large amount of NIR and absorb quite a bit of red light.


Sources:
Satellite image is from Earth Resources Observation and Science Center, United States Geological Survey.

Wednesday, December 9, 2015

Lab 7: Photogrammetry

Goal and Background:
The goal of this lab was to become familiar with stereoscopy and orthorectification tasks on satellite and aerial images. These tasks were accomplished by understanding the mathematical models used to calculate photographic scale, measuring the areas and perimeters of features, and calculating relief displacement.

Methods:

PART 1 - Scales, Measurements, and Relief Displacement

Section 1: Calculating Scale of Nearly Vertical Aerial Photographs
I was given two different, nearly vertical images, and my first task was to determine the distance between two points, which I would then use to calculate the scale. I was given the ground distance of 8,824.47 ft between point A and point B. I then measured the distance between the two points on the aerial image on my screen and determined the scale as follows:

Photo distance: 2.7 in; ground distance: 8,824.47 ft
Ground distance in inches: 8,824.47 ft x 12 in/ft = 105,893.64 in
Scale = 2.7 in / 105,893.64 in = 1 in / 39,220 in
Scale ≈ 1:40,000

I was then given an image of Eau Claire County taken by a high-altitude reconnaissance aircraft. The photograph was acquired at an altitude of 20,000 feet above sea level with a camera lens focal length of 152 mm, and I was told that the elevation of Eau Claire County is 796 ft above sea level. I then determined the scale of this photograph with the following equation:

S = f/(H - h)
f = focal length = 152mm
H = altitude above sea level = 20,000ft
h = terrain elevation = 796ft

S = 152mm/(20,000ft - 796ft)
S = 5.98in/(240,000in - 9,552in)
S = 1in/38,536.45in
Scale ≈ 1:39,000
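As a quick arithmetic check, both scale methods are easy to express as small Python helper functions (my own additions, not part of the lab):

def scale_from_ground_distance(photo_in, ground_ft):
    """Scale denominator (the N in 1:N) from a measured photo distance
    and the matching real-world ground distance."""
    return ground_ft * 12.0 / photo_in

def scale_from_focal_length(focal_mm, altitude_ft, elevation_ft):
    """Scale denominator from lens focal length and flying height above terrain."""
    focal_in = focal_mm / 25.4
    return (altitude_ft - elevation_ft) * 12.0 / focal_in

print(f"1:{scale_from_ground_distance(2.7, 8824.47):,.0f}")   # ~1:39,220, rounds to 1:40,000
print(f"1:{scale_from_focal_length(152, 20000, 796):,.0f}")   # ~1:38,509 (the hand calculation
                                                              # above rounds f to 5.98 in)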

Section 2: Measurement of Areas of Features on Aerial Photographs
I used the 'Measure Perimeters and Areas' digitizing tool to draw a polygon around a feature (pond) on the image. When I finished digitizing, I was given the feature's perimeter and area in different units.

Section 3: Calculating Relief Displacement From Object Height
Here I had to determine the relief displacement of a smoke stack in an aerial image. I was given the height of the aerial camera above the datum (3,980 ft) and the scale of the aerial photograph (1:3,209), and I measured the distance from the principal point to the top of the smoke stack, which I used to calculate the relief displacement.
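The standard relief-displacement formula is d = h x r / H, where h is the object's real-world height, r the radial photo distance from the principal point to the object's top, and H the flying height above the datum. A small Python sketch of the steps, with placeholder measurements since my actual measured values are not recorded in this post:

# Given in the lab:
H = 3980.0            # flying height above datum, ft
scale_denom = 3209.0  # photo scale 1:3209

# Placeholder measurements (hypothetical, not the lab's values):
stack_photo_len_in = 0.5   # measured length of the smoke stack on the photo
r_in = 10.5                # radial distance from principal point to stack top

h_ft = stack_photo_len_in * scale_denom / 12.0  # real-world stack height via scale
d_in = h_ft * r_in / H                          # relief displacement on the photo
print(f"object height ~{h_ft:.0f} ft, displacement ~{d_in:.2f} in")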

PART 2 - Stereoscopy
This part of the lab focused on creating and analyzing an anaglyph image. I brought an image with 1-meter spatial resolution and a DEM with 10-meter spatial resolution into Erdas, then used the 'Anaglyph' tool to input them. I increased the vertical exaggeration to 2 and ran the model, which created a new anaglyph image that I was able to analyze with anaglyph (red/blue) glasses.

PART 3 - Orthorectification
The goal of this part of the lab was to become familiar with the Erdas Imagine Leica Photogrammetry Suite (LPS), which is used for triangulation, orthorectification of images collected by numerous sensors, and related tasks. With LPS, images can be orthorectified to produce planimetrically true orthoimages.

Section 1: Create a New Project
I created a new project for an image of Palm Springs, California by opening the Imagine Photogrammetry Project Manager and creating a new block file. I chose Polynomial-based Pushbroom as the Geometric Model Category and SPOT Pushbroom as the model. I set the projection type to UTM, the spheroid to Clarke 1866, and the datum to NAD27 (CONUS) with UTM Zone 11.

Section 2: Add Imagery to the Block and Define Sensor Model
Now it was time to input the images to the block and define the sensor model. I added the images and verified the parameters of the SPOT pushbroom sensor.

Section 3: Activate Point Measurement Tool and Collect GCPs
I selected 'Start Measurement' and used the 'Classic Point Measurement Tool'. I brought in both images, using the already-orthorectified image as a reference to correct the second image. I collected two GCPs and then selected the 'Automatic (x, y) Drive' icon. I collected seven more GCPs for a total of nine; after the ninth GCP, I reset my horizontal reference source to a different image and continued until I had a total of 11 GCPs. Next, I chose the 'Reset Vertical Reference Source' icon and set the DEM as my vertical reference file, then clicked the 'Update Z Values on Selected Points' icon to obtain elevation values for the GCPs.

Section 4: Set Type and Usage, Add a 2nd Image to the Block and Collect its GCPs
After I obtained all of the elevation information, I was finished collecting GCPs for the first image. So I added a second image to the block and collected 11 more GCPs in the same manner as for the first image.

Section 5: Automatic Tie Point Collection, Triangulation, and Ortho Resample
Here I used the 'Automatic Tie Point Generation Properties' icon from the 'Point Measurement' tools, set the images to 'All Available' and the initial type to 'Exterior/Header/GCP', and set the 'Intended Number of Points/Image' to 40. After the tie points were created, I checked the tie point summary to see how accurate my GCPs were. Next I clicked 'Edit > Triangulation Properties', changed the 'Iterations of Relaxation' value to 3, selected 'Same Weighted Values', and set the X, Y, and Z fields to 15. I then ran the triangulation.

I then used the 'Start Ortho Resampling Process' icon, set the 'DTM Source' to DEM, and input my DEM image. I selected 10 for the output X and Y cell sizes and verified that the resampling technique was 'Bilinear Interpolation'. Finally, I clicked the Add button to include my second image and ran the model.

Section 6: Viewing the Orthorectified Images
I brought both of my orthorectified images into Erdas Imagine to see how successful the correction was.

Results:
 
Fig. 1: This image was used in Part 1 section 1 to determine the scale of the aerial photograph.

 
Fig. 2: This image was used to determine the relief displacement of the smoke stack.
 
 
Fig. 3: This figure shows the final orthorectified image.
 
 
Sources:
Erdas Imagine, 2009. Digital Elevation Model (DEM) of Palm Springs, CA.
Erdas Imagine, 2009. National Aerial Photography Program (NAPP) 2 meter images.
Erdas Imagine, 2009. SPOT satellite images.
United States Department of Agriculture, 2005. National Agriculture Imagery Program (NAIP).
United States Department of Agriculture and Natural Resources Conservation Service, 2010. Digital 
      Elevation Model (DEM) of Eau Claire, WI.
 


Lab 6: Geometric Correction

Goal and Background:
The purpose of this lab was to learn geometric correction skills using image-to-map rectification and image-to-image registration.

Methods:
PART 1 - Image-to-Map Rectification
I brought a satellite image and a digital raster graphic (map) of the Chicago area into Erdas. I activated the Multispectral raster processing tools and clicked Control Points; in the pop-up menu I selected Polynomial under Select Geometric Model and accepted the default parameters in the GCP Tool Reference Setup. I then collected GCPs from the map image using the Create GCP tool. I first collected four GCPs and adjusted them until my Total Control Point Error (RMS error) was less than 2.0, striving to get below 0.5. Finally, I used the Display Resample Image Dialog tool to create the rectified image.
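The RMS error the GCP tool reports is essentially the root mean square of the distances between where the fitted polynomial puts each GCP and where it should land. A minimal numpy sketch of that idea for a 1st-order (affine) polynomial, using made-up coordinates:

import numpy as np

# Made-up GCPs: source (image) coordinates and target (map) coordinates.
src = np.array([[10.0, 12.0], [200.0, 15.0], [190.0, 180.0], [20.0, 175.0]])
dst = np.array([[100.5, 50.2], [290.1, 48.9], [282.3, 214.7], [108.8, 211.0]])

# 1st-order polynomial: x' = a0 + a1*x + a2*y (and likewise for y').
A = np.column_stack([np.ones(len(src)), src])      # design matrix
coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)   # least-squares fit
residuals = A @ coeffs - dst                       # per-point (dx, dy) error
rmse = np.sqrt((residuals ** 2).sum(axis=1).mean())
print(f"total RMS error: {rmse:.3f}")

With four GCPs an affine fit is overdetermined, so the residuals and RMS error are meaningful; with exactly three points the fit would pass through them and the error would be zero by construction.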

PART 2 - Image-to-Image Registration
This part of the lab was much like the first, but instead of a 1st-order polynomial I used a 3rd-order polynomial, which requires a minimum of ten GCPs rather than three (an order-t polynomial needs (t+1)(t+2)/2 GCPs, so 3 for 1st order and 10 for 3rd order). I chose to use 12 GCPs just to be on the safe side. I then used the Display Resample Image Dialog tool to create the adjusted image, this time changing the resampling method to Bilinear Interpolation.

Results:

Sources:
Satellite images are from Earth Resources Observation and Science Center, United States Geological Survey. Digital raster graphic (DRG) is from the Illinois Geospatial Data Clearinghouse.

Lab 5: Lidar Remote Sensing

Goal and Background:
The purpose of this lab is to gain a basic understanding of Lidar data through its structure and processing. To gain this understanding, the lab required me to generate various surface and terrain models and to create a couple of derivative products from point clouds.

Methods:

PART 1 - Point Cloud Visualization in Erdas Imagine
I opened an LAS dataset of Eau Claire in ArcMap and assigned it the NAD 1983 HARN Wisconsin CRS Eau Claire (US Feet) XY coordinate system and the NAVD 1988 US Feet Z coordinate system. I then brought in a shapefile of Eau Claire County that was provided with the lab; since the shapefile lined up with my study area, I knew the XY coordinate system I chose was correct.

PART 2 - Generate an LAS Dataset and Explore Lidar Point Clouds with ArcGIS

Section 1: Create Folder Connection
I created an LAS dataset from the Eau Claire data files using ArcCatalog and checked the metadata to make sure I still had the proper horizontal and vertical coordinate systems. I then explored the surface menu options to look at the Aspect, Slope, and Contours of various features, noting the differences and similarities between each option. One feature I examined was a bridge crossing a river: I set 'Points' to 'Elevation' and 'Filter' to 'First Return', clicked the 'LAS Dataset Profile View' tool, selected an AOI spanning the length and width of the bridge, and observed the bridge in the new window (Fig. 1).

PART 3 - Generation of Lidar Derivative Products

Section 1: Deriving DSM and DTM Products from Point Clouds
In this part of the lab I needed to determine the spatial resolution at which derivative products should be produced by estimating the nominal point spacing (NPS) at which the point clouds were collected. This was obtained from the Point Spacing information under the LAS Dataset Properties screen. I then used the LAS Dataset to Raster tool in ArcToolbox to create a digital surface model (DSM) of the first returns, setting the cell assignment type to Maximum, Void Filling to Natural Neighbor, and the Sampling Value to 6.56168 feet, which is roughly 2 meters. I then used the Hillshade tool under Raster Surface to create a hillshade of the DSM, which showed a more distinct topographic/geomorphic profile of the study area. I used the same process to create a hillshade of the DTM. Finally, I put both outputs on one screen in Erdas and used the swipe tool to compare the differences between the two models.
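Conceptually, the Hillshade tool lights each DEM cell from a chosen sun position. A minimal numpy sketch of the standard hillshade formula (the ArcGIS tool adds edge handling and other refinements), using a placeholder elevation array:

import numpy as np

def hillshade(dem, cellsize=6.56168, azimuth=315.0, altitude=45.0):
    """Hillshade a DEM array: illumination from a given sun azimuth/altitude."""
    zen = np.radians(90.0 - altitude)           # sun zenith angle
    az = np.radians(360.0 - azimuth + 90.0)     # compass to math convention
    dz_dy, dz_dx = np.gradient(dem, cellsize)   # terrain slopes per axis
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, -dz_dx)
    shade = (np.cos(zen) * np.cos(slope) +
             np.sin(zen) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(255.0 * shade, 0, 255).astype(np.uint8)

# Placeholder DEM (a gentle ramp) just to demonstrate the call.
dem = np.outer(np.linspace(240.0, 260.0, 200), np.ones(200))
print(hillshade(dem).min(), hillshade(dem).max())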

Section 2: Deriving Lidar Intensity Image from Point Cloud
I first set the LAS Dataset to Points and the filter to First Return. I then input the LAS dataset for the city of Eau Claire into the LAS Dataset to Raster tool, set the Value Field to Intensity, the Binning cell assignment to Average, and the Void Filling to Natural Neighbor, with a cell size of 2 meters.
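The Average binning is conceptually a 2-D histogram: sum the intensities of the points falling in each cell and divide by the point count (natural-neighbor void filling is then a separate interpolation step). A small numpy sketch with made-up points:

import numpy as np

# Made-up lidar points: x, y (meters) and return intensity.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 1000), rng.uniform(0, 100, 1000)
intensity = rng.uniform(0, 255, 1000)

cell = 2.0  # 2 m cells, matching the lab
edges = [np.arange(0, 100 + cell, cell)] * 2
counts, _, _ = np.histogram2d(x, y, bins=edges)
sums, _, _ = np.histogram2d(x, y, bins=edges, weights=intensity)
with np.errstate(invalid="ignore"):
    mean_intensity = sums / counts  # NaN where a cell holds no points (void)
print(mean_intensity.shape)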

Results:
Fig. 1: LAS first return point cloud of a bridge


Fig. 2: Hillshade image of the DSM

Fig. 3: Hillshade image of the DTM
Fig. 4: Intensity image obtained from the Eau Claire LAS dataset
 
 
Sources:
All data was provided by Dr. Cyril Wilson in Lab 5.

Remote Sensing Lab 4

Goal and Background: The purpose of this lab is to use Erdas Imagine to subset a specific study area from a larger satellite image, optimize the spatial resolution of images for visual interpretation, link to Google Earth to obtain ancillary information, and get an introduction to various satellite image resampling methods, image mosaicking, and binary change detection.

Methods:

PART 1 - Image Subsetting

Section 1: Subsetting with the use of an inquire box
I first opened an Erdas Imagine viewer and brought in a TM image that was given to me by my professor. I activated the raster tools and right-clicked on the image to select the 'inquire box'. I used the inquire box to outline a study area that encompassed all of the Eau Claire and Chippewa area. I then used the 'Subset & Chip' and 'Create Subset Image' tools, saved the subset as an output file, and clicked the 'From Inquire Box' button to bring the coordinates covered by the inquire box into the subset interface. After this ran, my subset image had been created (Fig. 1).

Section 2: Subsetting with the use of an area of interest (AOI) shape file
In this section I used the same TM image as in Section 1. I added a shapefile of Eau Claire and Chippewa counties to the viewer on top of my input image; in order to see it, I had to change the file type filter from image (.img) to shapefile (.shp). Once the shapefile overlaid the TM image, I created an AOI from it by holding down the Shift key and clicking on the two counties. Next I clicked 'Paste from Selected Object' in the toolbar, which converted the selection (shown with dotted lines) into an AOI. I saved the new AOI file and employed 'Subset & Chip' as in Section 1, this time using the AOI file to create my subset image (Fig. 2).

PART 2 - Image Fusion

The goal here was to increase the spatial resolution of a coarse-resolution image with the use of another image. I used the raster 'Pan Sharpen' and 'Resolution Merge' tools to fuse the two images, inputting both the panchromatic band image and the multispectral band image. I selected 'Nearest Neighbor' as the resampling technique and opened the metadata of the original image to check the pixel size. The merge resampled the multispectral image to the panchromatic pixel size while pan-sharpening it.
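One classic fusion method is the Brovey transform, which rescales each multispectral band by the ratio of the pan band to the sum of the bands. A minimal numpy sketch of the idea (not necessarily the method the tool applied here), assuming the multispectral array has already been resampled onto the pan grid:

import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """Brovey transform: ms has shape (bands, rows, cols), pan (rows, cols),
    both already on the same fine grid."""
    ratio = pan / (ms.sum(axis=0) + eps)  # per-pixel scaling factor
    return ms * ratio                     # each band rescaled toward pan detail

# Made-up 3-band multispectral and pan arrays just to show the call.
ms = np.random.rand(3, 200, 200).astype(np.float32)
pan = np.random.rand(200, 200).astype(np.float32)
print(brovey_pansharpen(ms, pan).shape)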

PART 3 - Simple Radiometric Enhancement Techniques

I was given an image with major haze issues which needed to be corrected. I used the 'haze reduction' raster tool to reduce the haze and clouds in the image.

PART 4 - Linking Image Viewer to Google Earth

This section introduced the ability to synchronize an image viewer in Erdas with Google Earth. I linked and synchronized my view screen with Google Earth so that zooming in one view zoomed the other as well, making it easy to use Google Earth imagery as ancillary information.

PART 5 - Resampling
 
Here I took the original image provided and used two resampling techniques to compare the outcomes. After noting the pixel size, I opened the raster tools and used Spatial > Resample Pixel Size on the input image. I first used the nearest neighbor technique and then repeated the process with bilinear interpolation to observe the differences between the two. Both times I resampled the image from 30x30 meters to 15x15 meters and made sure the pixel sizes remained square.
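The two techniques differ in how each output pixel is filled: nearest neighbor copies the closest input value (preserving original DNs), while bilinear interpolation distance-weights the four surrounding values (smoother, but altered DNs). A minimal scipy sketch of a 30 m to 15 m resample on a made-up band:

import numpy as np
from scipy.ndimage import zoom

band = np.random.randint(0, 255, (100, 100)).astype(np.float32)  # fake 30 m band

nearest = zoom(band, 2, order=0)   # order=0: nearest neighbor, DNs unchanged
bilinear = zoom(band, 2, order=1)  # order=1: bilinear, DNs smoothed
print(nearest.shape, bilinear.shape)  # both (200, 200), i.e. 15 m pixels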

PART 6 - Image Mosaicking

Image mosaicking is helpful when a study area is larger than the spatial extent of one satellite image scene, or when the AOI is relatively small but lies at the intersection of two adjacent satellite scenes. I imported my input images, making sure that 'Multiple Images in Virtual Mosaic' and 'Background Transparent' were checked.

Section 1: Image Mosaic with the Use of Mosaic Express
I used the raster mosaic tool Mosaic Express. I input my two images in the correct order and ran the tool. Looking at the new image, I could see that the result was not entirely desirable, as there was a clear seam between the two images.

Section 2: Image Mosaic with the Use of MosaicPro
MosaicPro is a more advanced mosaicking tool. I added the two input images and made sure 'Compute Active Area' was set as the default. I adjusted the radiometric properties by selecting Color Corrections with Use Histogram Matching, and set the matching method to Overlap Areas so that the histograms were matched over the overlapping areas to preserve color and brightness values. I then ran the mosaic.
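Histogram matching remaps one image's DN distribution onto another's so the seam between them is less visible. A compact numpy sketch of the idea (matching over whole images rather than just the overlap area, for brevity), with made-up arrays:

import numpy as np

def match_histogram(source, reference):
    """Remap source DNs so their CDF matches the reference image's CDF."""
    s_values, s_counts = np.unique(source, return_counts=True)
    r_values, r_counts = np.unique(reference, return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_values)  # reference DN at each source quantile
    return matched[np.searchsorted(s_values, source)]

src = np.random.randint(40, 200, (100, 100))
ref = np.random.randint(10, 255, (100, 100))
print(match_histogram(src, ref).mean())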

PART 7 - Binary Change Detection

Here I learned about binary change detection and image differencing.

Section 1: Creating a Difference Image
I input a 2011 and a 1991 multispectral image of the Chippewa Valley area into the Two Image Functions interface and subtracted the 1991 image from the 2011 image. To determine the cutoff points for the values that had changed between those years, I used the difference image's histogram and metadata and the equation mean + 1.5 x standard deviation, based on the Gaussian distribution shown in the histogram.

Section 2: Mapping Change Pixels in Difference Image Using Spatial Modeler
This section produced a map of changes in the Eau Claire County area over the same time frame as Section 1. I used the following equation to create a model that would remove the negative values from my difference image brightness values:

ΔBVijk = BVijk(1) - BVijk(2) + c

Where:
ΔBVijk = change pixel value
BVijk(1) = brightness value of the 2011 image
BVijk(2) = brightness value of the 1991 image
c = a constant (127 in this case)
i = line number
j = column number
k = a single band of Landsat TM
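Putting the differencing and the threshold from Section 1 together; a minimal numpy sketch with made-up single-band arrays (real images would be read with rasterio or similar):

import numpy as np

# Made-up single-band images standing in for the 2011 and 1991 scenes.
bv_2011 = np.random.randint(0, 255, (500, 500)).astype(np.int16)
bv_1991 = np.random.randint(0, 255, (500, 500)).astype(np.int16)

c = 127  # constant offset so the difference image has no negative values
diff = bv_2011 - bv_1991 + c

# Section 1's cutoff: pixels beyond mean + 1.5 standard deviations are
# flagged as change (the lower tail could be treated symmetrically).
mu, sigma = diff.mean(), diff.std()
changed = diff > mu + 1.5 * sigma
print(f"{changed.mean():.1%} of pixels flagged as change")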
 
Results:

Fig. 1

 
Fig. 2
 
 
Fig. 3: This image shows the pan-sharpened result from Part 2.
 
 
Fig. 4: This image shows the result from Part 6, Section 1, after using Mosaic Express.
 
Sources:
All images and data were provided by Dr. Cyril Wilson in Lab 4.