Monday, July 13, 2015

Week 8 - AppsGIS - Damage Assessment

This week we looked at damage assessment, using data from Hurricane Sandy. We started by mapping the path of the hurricane, showing the category of the storm, the barometric pressure, and the wind speed at different points. The map shows the countries, as well as the states within the United States, impacted by the storm.

We then "prepared" the data by adding in aerial photos of before and after the storm, so that the images could be compared. We did this using the flicker and slider tools in the Effects Toolbar. We also used Editing sessions to add attribute data, using attribute domains and changing their properties to utilize both codes and descriptions (i.e. for Structure Type: 1 - Residential, 2 - Government, 3 - Industrial, 4 - Unknown). 

A major part of our analysis was creating new damage data for the imagery. An edit session was started for a newly created feature class (with the domains created beforehand). Using the pre-storm imagery, buildings were selected and their domain values set to describe each building. The categories were structure damage, wind damage, inundation, and structure type. To do this, I used the Create Features window to select the point option, and a point was placed on a building within our study area. I then clicked the Attributes tab so that the attributes could be adjusted according to the evaluation. The Swipe tool was used to compare the before and after imagery.

I digitized all of the buildings in the study area this way, changing the attributes after placement to reflect the post-storm imagery. The resulting points were categorized based on structure damage level:



I chose to fill in all of the values for each point, not just the structure damage level.

Afterwards, we used a new polyline feature class to show the location of the coastline prior to the storm. A simple straight line was used because we wanted to show the number of buildings affected (and how) in relation to the coastline (e.g., how many buildings within 100 meters of the coast have major damage?). To determine the number of buildings within each distance band, I used the Select by Location tool to select the parcels within that distance. For 0-100 meters, I did a simple "select features from" selection, with the Coastline layer as the source layer and the selection method "are within a distance of the source layer". For both the 100-200 and 200-300 meter bands, I did a "select features from" with the maximum distance, followed by a "remove from the selection" for values less than the intended range. The final results were exported to new layers. To determine the number of buildings at each structure damage level, I used Select by Attributes within each (new) layer's attribute table. This was done in order to look at the patterns of damage.
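I did all of this interactively, but the same distance-ring selections could be scripted. Here is a rough arcpy sketch with made-up layer, path, and field names (the damage codes are assumptions too):

```python
import arcpy

# Hypothetical inputs: digitized damage points and the pre-storm coastline
points = arcpy.management.MakeFeatureLayer(r"C:\GIS\Sandy.gdb\DamagePoints", "damage_lyr").getOutput(0)
coast = r"C:\GIS\Sandy.gdb\Coastline"

# 0-100 m: everything within 100 m of the coastline
arcpy.management.SelectLayerByLocation(points, "WITHIN_A_DISTANCE", coast, "100 Meters", "NEW_SELECTION")
arcpy.management.CopyFeatures(points, r"C:\GIS\Sandy.gdb\Damage_0_100")

# 100-200 m: select within 200 m, then remove everything within 100 m
arcpy.management.SelectLayerByLocation(points, "WITHIN_A_DISTANCE", coast, "200 Meters", "NEW_SELECTION")
arcpy.management.SelectLayerByLocation(points, "WITHIN_A_DISTANCE", coast, "100 Meters", "REMOVE_FROM_SELECTION")
arcpy.management.CopyFeatures(points, r"C:\GIS\Sandy.gdb\Damage_100_200")

# Count buildings at each structure-damage level within one exported ring
ring = arcpy.management.MakeFeatureLayer(r"C:\GIS\Sandy.gdb\Damage_100_200", "ring_lyr").getOutput(0)
for level in (0, 1, 2, 3, 4):  # assumed damage codes
    arcpy.management.SelectLayerByAttribute(ring, "NEW_SELECTION", f"Struc_Damage = {level}")
    print(level, int(arcpy.management.GetCount(ring)[0]))
```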
 

After we did the above, we combined the spatial assessment data with the parcel data, so that we could match the attributes within the table. This was accomplished using the Attribute Transfer tool. This was done for approximately 1/3 of the buildings within the study area.

I thoroughly enjoyed this week's exercise and learning how damage assessments are performed. It helped make the limitations of aerial imagery more apparent, such as when it is hard to tell whether damage is from wind or not. It will definitely come in handy the next time there is a hurricane here!

Saturday, July 4, 2015

Week 7 - AppsGIS - Coastal Flooding

This week we worked with coastal flooding from sea level rise and from storm surge. For the sea level rise, we looked at two scenarios, one with a 3-foot rise and one with a 6-foot rise. We then mapped the results and how they relate to population density. Later, for the storm surge analysis, we compared two different DEM models.

In order to look at the impact of sea level rise, we started with a DEM raster of the area, from which we wanted to extract only the cells that would be impacted by the associated rise in sea level. To do this, the Reclassify tool was used, and the data was reclassified so that only the values of interest (up to 3 feet or 6 feet) were included; all others were changed to "NoData". The resulting attribute table was examined to determine the number of cells that are flooded (value of 1) and not flooded (value of 0). 
We then looked at the properties of the layer to determine that each raster cell covers 9 m². This was multiplied by the number of cells within each floodzone to determine the area.
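A rough arcpy version of that reclassify-and-measure step might look like the following. The paths are made up, and I am assuming the DEM elevations are in meters (so 6 feet is about 1.83 m) with 3 m cells:

```python
import arcpy
from arcpy.sa import Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")

dem = r"C:\GIS\Honolulu.gdb\DEM"    # hypothetical DEM path
cutoff = 6 * 0.3048                 # 6 ft expressed in meters (~1.83 m)

# Keep only cells at or below the cutoff (value 1); everything else becomes NoData
flood = Reclassify(dem, "VALUE", RemapRange([[0, cutoff, 1]]), "NODATA")
flood.save(r"C:\GIS\Honolulu.gdb\Flood6ft")

# Each cell is 3 m x 3 m = 9 m2, so area = cell count x 9
with arcpy.da.SearchCursor(r"C:\GIS\Honolulu.gdb\Flood6ft", ["VALUE", "COUNT"]) as rows:
    for value, count in rows:
        print(f"Value {value}: {count} cells = {count * 9} m2")
```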

To analyze the depth of flooding, I used the results from the Reclassify tool as an input to the Times tool, in order to create a new raster of only the floodzone elevations. To get the flooded depth, I used the Minus tool, with the equivalent of either 3 feet or 6 feet as the first input and the results from the Times tool as the second input. These results were then mapped against the population density of the census tracts (this map is only for a rise of 6 feet):
Figure 1. Map of the District of Honolulu showing the impact of a 6-foot sea level rise, along with population densities of the region.
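The depth calculation itself is just two map algebra steps. A sketch under the same assumptions as above (made-up paths, elevations in meters):

```python
import arcpy
from arcpy.sa import Raster, Times, Minus

arcpy.CheckOutExtension("Spatial")

dem = Raster(r"C:\GIS\Honolulu.gdb\DEM")         # hypothetical paths
flood = Raster(r"C:\GIS\Honolulu.gdb\Flood6ft")  # 1 = flooded, NoData elsewhere
cutoff = 6 * 0.3048                              # 6 ft in meters

# Elevation of only the flooded cells (NoData elsewhere drops out of the multiplication)
flood_elev = Times(flood, dem)

# Depth of flooding = water level minus ground elevation
depth = Minus(cutoff, flood_elev)
depth.save(r"C:\GIS\Honolulu.gdb\Depth6ft")
```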

We then wanted to look at the social vulnerability of the area, using data from the 2010 Census. In order to do this, we first had to determine which census blocks were located within the floodzones. For our analysis, we chose to select those whose centroid was located in the floodzone, using the Select By Location tool. Looking at the attribute table, we were able to determine how many blocks were selected, and what population this affects. This can be done by simply looking at the statistics for the column of choice, as only the selected rows are summarized.

Next, we had to add fields in the census tracts layer for each of the groups of interest: percent white residents, percent owner-occupied homes, and percent of homes with people over the age of 65. Table joins were then conducted in order to copy over the information, with each join removed prior to joining another. The Field Calculator was used to fill in the data. 

This was then repeated for the census blocks layer, with additional fields added for the population of white residents, owner-occupied homes, and those over the age of 65. For our analysis, the census blocks did not include the make-up of each block, so the census tract data was used; it was assumed that the population composition of each block matched that of its parent tract. A table join was created so that the percentages could be copied over, and these percentages were then used to estimate the size of each of the three populations in each block. I used the Select by Location tool to select the blocks with their centroid located within the floodzone, as above, for both 3 and 6 feet, and then used the Statistics function to get the sum for each population in order to fill out the table in Deliverable 7. To get the values for the non-flooded areas, I simply switched the selection. This was done for all of the variables in the table, and the populations for each variable were then divided by the total population for that category (3 feet flooded, 3 feet not-flooded, etc.).
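Assuming the joined tract percentage ends up in a field like PCT_WHITE on the block layer (a made-up field name, as is POP100), the population estimate is a one-line Field Calculator expression, or in arcpy:

```python
import arcpy

blocks = r"C:\GIS\Honolulu.gdb\CensusBlocks"   # hypothetical feature class

# Add a field for the estimated white population of each block
arcpy.management.AddField(blocks, "POP_WHITE", "DOUBLE")

# Estimate = block population x the joined tract-level percentage
# (POP100 and PCT_WHITE are assumed field names)
arcpy.management.CalculateField(blocks, "POP_WHITE", "!POP100! * !PCT_WHITE! / 100", "PYTHON3")
```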

The results were as follows:

Variable            Entire District   3 Feet Scenario              6 Feet Scenario
                                      Flooded      Not-flooded     Flooded      Not-flooded
Total Population    1,360,301         8,544        1,351,757       60,005       1,300,296
% White             24.7 %            36.8 %       24.7 %          29.6 %       24.5 %
% Owner-occupied    58.2 %            32.2 %       58.3 %          38.1 %       59.1 %
% 65 and older      14.3 %            17.11 %      14.3 %          17.0 %       14.2 %

Table 1. Percent of population represented by each group in flooded areas compared to not-flooded areas. Results are for 3 and 6 feet of sea level rise.
After this analysis, we looked at storm surge in Collier County, Florida. The purpose of this analysis was to compare the results of two different DEMs: one from USGS created using older methods (and at lower resolution), and one created using Lidar. To compare the two, we calculated the percent error of omission (data that should have been included but was not) and the percent error of commission (false positives). For our analysis, the Lidar data was treated as accurate for the calculations. The results were quite different, and showed the major benefits of Lidar techniques, with most errors of commission being above 100%.
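For reference, the error percentages were calculated along these lines (the cell counts below are placeholders, not the actual numbers from the lab):

```python
# Treat the Lidar-based surge extent as "truth"
lidar_cells = 50000          # cells flooded in the Lidar result (placeholder value)
usgs_cells_missed = 4000     # flooded in Lidar but not in the USGS DEM (omission)
usgs_cells_extra = 60000     # flooded in the USGS DEM but not in Lidar (commission)

pct_omission = usgs_cells_missed / lidar_cells * 100
pct_commission = usgs_cells_extra / lidar_cells * 100

print(f"Omission: {pct_omission:.1f}%  Commission: {pct_commission:.1f}%")
# A commission error above 100% means the coarse DEM flagged more "extra" flooded
# area than the entire Lidar flood zone.
```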

Being a resident of Florida, I found this week's analysis quite intriguing. I especially liked being able to look at how seemingly small amounts of sea level rise can affect such large areas. I certainly used the NOAA Sea Level Rise Viewer to see if my house was in danger, especially since our yard floods quite a bit in heavy rains. I expect to use this new knowledge quite a bit in my future.

Monday, June 29, 2015

Week 6 - AppsGIS - Hotspot Analysis

This week we looked at hotspot mapping, using crime data. We compared the results of three different methods: Kernel Density, Local Moran's I, and grid-based thematic mapping. We used different data to determine things such as burglaries per housing unit and homes for rent per census tract. We also used the graphing abilities of ArcMap to compare the results, such as:

Figure 1. Graph of burglary rate per housing unit compared to
number of housing units that are rented.

We also looked at Kernel Density hotspots, classified based on the average, twice the average, three times the average, etc.

Figure 2. Kernel Density analysis of crimes.

Finally, we performed all three methods of analysis on the same dataset of burglaries in Albuquerque, New Mexico in 2007. We wanted to see how the results compared to one another, as well as how well they predicted crimes in the following year.

For the grid-based thematic mapping, a Spatial Join was first created to combine the grids with the burglaries in 2007. A SQL query was set (Join_Count = 0), and the selection was then switched to choose all grids with at least one crime. This was exported into a new shapefile of the grids where crime occurred, and its attribute table was sorted by crime count in descending order. The grids with the top 20% of the number of crimes were selected and exported as a new shapefile. In this attribute table, a new field named "Dissolve" was added, and the Field Calculator was used to set its value to 1 for all of the grids. This way, the Dissolve tool could be used to create one polygon as the result.

Figure 3. Grid-based thematic mapping result of burglary hotspot areas
(top 20% number of crimes per grid).
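The grid steps above could be scripted roughly as follows. The paths are hypothetical, and the only field I rely on is the Join_Count that the Spatial Join creates:

```python
import arcpy

grids = r"C:\GIS\Crime.gdb\Grids"                 # hypothetical inputs
burglaries = r"C:\GIS\Crime.gdb\Burglaries2007"
joined = r"C:\GIS\Crime.gdb\Grids_Burg2007"

# Spatial Join adds a Join_Count field with the number of burglaries per grid cell
arcpy.analysis.SpatialJoin(grids, burglaries, joined)

# Keep only grids with at least one crime
lyr = arcpy.management.MakeFeatureLayer(joined, "grids_lyr").getOutput(0)
arcpy.management.SelectLayerByAttribute(lyr, "NEW_SELECTION", "Join_Count > 0")

# Find the count that marks (roughly) the top 20% of grids with crime
counts = sorted((row[0] for row in arcpy.da.SearchCursor(lyr, ["Join_Count"], "Join_Count > 0")), reverse=True)
threshold = counts[max(int(len(counts) * 0.2) - 1, 0)]

# Select the top 20%, export, and dissolve into a single hotspot polygon
arcpy.management.SelectLayerByAttribute(lyr, "NEW_SELECTION", f"Join_Count >= {threshold}")
arcpy.management.CopyFeatures(lyr, r"C:\GIS\Crime.gdb\Grid_Top20")
arcpy.management.Dissolve(r"C:\GIS\Crime.gdb\Grid_Top20", r"C:\GIS\Crime.gdb\Grid_Hotspot")
```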

For Kernel Density, the environment settings for both Processing Extent and Raster Analysis were set to match the grids. For the Kernel Density tool, the burglaries dataset was used with a search radius of 0.5 miles, or 2,640 feet. The symbology was adjusted in order to determine the mean value (of crimes) when areas with 0 crimes were excluded, and the categories were then adjusted to 0 to the mean, the mean to 2x the mean, 2x the mean to 3x the mean, etc. The raster was reclassified so that all values below 3x the mean were set to NoData, and all values above were classified as 1. This was then converted from raster to polygon, and dissolved into one polygon.

Figure 4. Kernel Density result of burglary hotspot areas
(higher than 3 times the average).
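A rough arcpy sketch of the Kernel Density workflow above, with hypothetical paths and the half-mile radius expressed in feet:

```python
import arcpy
from arcpy.sa import KernelDensity, SetNull, Con

arcpy.CheckOutExtension("Spatial")

burglaries = r"C:\GIS\Crime.gdb\Burglaries2007"   # hypothetical path

# Density surface with a half-mile (2,640 ft) search radius; cell size comes from the environment
kd = KernelDensity(burglaries, "NONE", search_radius=2640)

# Mean density, ignoring the zero cells
no_zeros = SetNull(kd == 0, kd)
mean_val = float(arcpy.management.GetRasterProperties(no_zeros, "MEAN").getOutput(0))

# Keep only cells above 3x the mean, then convert to a single polygon
hotspot = Con(kd > 3 * mean_val, 1)
arcpy.conversion.RasterToPolygon(hotspot, r"C:\GIS\Crime.gdb\KD_Hotspot_poly", "NO_SIMPLIFY")
arcpy.management.Dissolve(r"C:\GIS\Crime.gdb\KD_Hotspot_poly", r"C:\GIS\Crime.gdb\KD_Hotspot")
```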

In order to perform the Moran's I analysis, I first performed a Spatial Join as before. The Match Option was set to "Contains" so that the count would be the number of points within each polygon (census block). One large polygon was removed because it was outside of the police jurisdiction and was impacting the analysis. Within the attribute table of the shapefile, a new field was added for Crime_Rate. Using the Field Calculator, Crime_Rate was set to [Join_Count] / [HSE_UNITS] * 1000. This divides the number of crimes per census block by the number of housing units, and then multiplies by one thousand, a common rate used for this type of analysis. Cluster and Outlier Analysis (Anselin Local Moran's I) was then performed, with the Input Field set to the calculated crime rate. Within the attribute table of the resulting shapefile, I used Select by Attributes to select the entries with High-High (HH) cluster results. These are areas with a high crime rate in close proximity to other areas with high crime rates. The selection was then exported to create a new shapefile, and the Dissolve tool was used to create a single polygon out of the resulting hotspots.

Figure 5. Local Moran’s I result of areas with high-high crime rates.
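And a sketch of the Moran's I steps above. Paths are hypothetical; Join_Count and HSE_UNITS come from the joined table, and COType is the cluster field the tool writes out:

```python
import arcpy

blocks = r"C:\GIS\Crime.gdb\Blocks_Burg2007"      # output of the Spatial Join (hypothetical path)
lisa_out = r"C:\GIS\Crime.gdb\Blocks_LISA"

# Burglary rate per 1,000 housing units (guarding against blocks with zero housing units)
arcpy.management.AddField(blocks, "Crime_Rate", "DOUBLE")
arcpy.management.CalculateField(blocks, "Crime_Rate",
                                "!Join_Count! / !HSE_UNITS! * 1000 if !HSE_UNITS! else 0", "PYTHON3")

# Anselin Local Moran's I (Cluster and Outlier Analysis)
arcpy.stats.ClustersOutliers(blocks, "Crime_Rate", lisa_out,
                             "INVERSE_DISTANCE", "EUCLIDEAN_DISTANCE", "NONE")

# Keep only the High-High clusters and dissolve them into one hotspot polygon
lyr = arcpy.management.MakeFeatureLayer(lisa_out, "lisa_lyr").getOutput(0)
arcpy.management.SelectLayerByAttribute(lyr, "NEW_SELECTION", "COType = 'HH'")
arcpy.management.CopyFeatures(lyr, r"C:\GIS\Crime.gdb\LISA_HH")
arcpy.management.Dissolve(r"C:\GIS\Crime.gdb\LISA_HH", r"C:\GIS\Crime.gdb\LISA_Hotspot")
```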
These maps were then combined onto one map in order to show how the analyses overlap. Afterwards, additional steps were taken to determine whether the hotspots accurately predicted the areas of high crime for the next year (2008). Analysis was based primarily on crimes per square kilometer within each determined hotspot, as this accounts for the differing sizes of the hotspot areas.

Figure 6. Map output showing the overlap of the three hotspot analyses.



  

Monday, June 22, 2015

Week 5 - AppsGIS - Spatial Accessibility

This week we worked with Network Analyst in order to learn more about spatial accessibility modeling. For the first part of the assignment, we used some of the ESRI tutorials. These tutorials were very easy to follow, and made the process go smoothly. My only complaint is that it was annoying to have to go back and forth between windows trying to follow the directions. I have gotten used to using my tablet to read the lab instructions, while working on the lab on my laptop. Makes for a much easier time. Unfortunately, this is not an option when working through ArcGIS Help.

Additionally, we worked with data looking at hospital accessibility in Georgia. Much of the analysis was performed using joins in ArcMap, followed by work in Excel. I have much more experience in Excel, but had not done much with data from ArcMap. Our results were primarily shown either in tables or in cumulative distribution functions (CDFs), such as the one below:
Figure 1. Distance to nearest psych hospital, shown by age group. This graph shows
that more of the elderly population live farther from hospitals than those under 65.
Finally, we used Network Analyst to look at spatial accessibility of community colleges in Travis County, Texas. We looked at the service areas of seven colleges in the area, with 5-, 10-, and 15-minute drive times. We first used the New Service Area function to set the colleges as facilities, and adjusted the settings so that the impedance was based on the drive-time intervals. We solved the analysis to obtain a total of 21 service area polygons. This was repeated after removing one of the colleges, Cypress Creek Campus, from the data set. The comparison resulted in the following:
Figure 2. Map showing the comparison of the service areas of the community colleges of Travis County, Texas.
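For reference, the service area setup can also be scripted with the Network Analyst module. This is only a sketch, with a made-up network dataset, facilities layer, and impedance attribute name:

```python
import arcpy

arcpy.CheckOutExtension("Network")

network = r"C:\GIS\Travis.gdb\Streets_ND"      # hypothetical network dataset
colleges = r"C:\GIS\Travis.gdb\Colleges"       # hypothetical facilities

# Service area layer with 5, 10, and 15 minute drive-time breaks
# ("DriveTime" is an assumed impedance attribute name; it depends on the network)
sa_layer = arcpy.na.MakeServiceAreaLayer(network, "CollegeSA", "DriveTime",
                                         "TRAVEL_FROM", "5 10 15").getOutput(0)

# Load the colleges as facilities and solve; each facility gets one polygon per break
arcpy.na.AddLocations(sa_layer, "Facilities", colleges)
arcpy.na.Solve(sa_layer)
```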

We then used Closest Facility analysis to determine the closest facilities (colleges) to each of the census blocks. We again performed two analyses, one for all seven schools, and one for six schools (after removal of Cypress Creek Campus). Luckily, you don't have to set up the parameters each time you want to run the analysis. After the analysis was performed, the tables were joined so that the information could be compared. We had to adjust the tables, though, as the FIPS code was not included in the tables of the Closest Facility analysis. Using Excel, the information in the completed table was analyzed to determine the spatial access for potential students in the area, before and after the closure of the college.

Looking at the attribute table, we had to select only those census blocks that were affected by the closure, and use the information within the attribute table to determine how they were impacted through changes in drive time. Lastly, we created a CDF of the information:
Figure 3. Resulting CDF for potential students affected by closure of Cypress Creek Campus.
I enjoyed learning about spatial accessibility this week, and feel that I am quite capable in this type of analysis. This clearly has many different potential applications in GIS analysis, and I am already thinking about how it can be used at my work.

Monday, June 15, 2015

Week 4 - AppsGIS - Visibility Analysis

This week we worked on visibility analysis, using the Viewshed and Line of Sight tools. We worked with four different scenarios: viewshed analysis for tower placement, security camera placement via viewshed, line of sight among summits, and visibility of portions of Yellowstone National Park from roads.

For the security camera analysis, we worked with a raster that showed the finish line of the Boston Marathon, and were tasked with adding more cameras that could see the finish line. The view for the given camera was initially a 360-degree view at ground level. This is not the case for a typical camera, so it was later edited to account for the camera being mounted on the side of a building (100 feet up) with a 90-degree field of view.
Figure 1. Visibility of camera near the finish line of Boston Marathon.
This is based on a 360-degree view of the camera at ground level.

The task for this portion of the assignment was to place two new cameras that would better cover the finish line. We had to place the cameras and adjust their horizontal viewing angles and vertical heights. Of the two cameras, one was placed close to the finish line (Camera 2) and one on the opposite side of the finish line from the first camera (Camera 3). Camera 2 was placed in a building to the north of the finish line, on the north side of the road. The vertical offset for this camera was 75 feet, determined by a digital elevation model that included buildings. The viewing angle for Camera 2 was set to 90-180 degrees. This part took quite a bit of tweaking, as the degrees were not as expected and it took a while to get them right; I am still not sure why this was the case. Camera 3 had a viewing angle of 180-270 degrees, and was as expected visually. This camera was set to a 100-foot vertical offset, and was located about half a block west of the finish line. To show the overlap of the viewsheds, they were ranked by the number of cameras that could see each cell, as shown below:
Figure 3. Overlap of viewsheds for cameras placed near the Boston Marathon finish line. Dark blue represents areas that are visible from all three cameras.
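Since the camera parameters are just fields on the observer points (OFFSETA for the height and AZIMUTH1/AZIMUTH2 for the viewing angle), the whole thing can be scripted. Here is a rough sketch with made-up paths and an assumed CameraID field:

```python
import arcpy
from arcpy.sa import Viewshed

arcpy.CheckOutExtension("Spatial")

dem = r"C:\GIS\Boston.gdb\Elevation_with_buildings"   # hypothetical surface (z-units in feet)
cameras = r"C:\GIS\Boston.gdb\Cameras"                # hypothetical observer points

# Make sure the observer fields exist
existing = {f.name for f in arcpy.ListFields(cameras)}
for field in ("OFFSETA", "AZIMUTH1", "AZIMUTH2"):
    if field not in existing:
        arcpy.management.AddField(cameras, field, "DOUBLE")

# Set Camera 3: 100 ft up, viewing 180-270 degrees (CameraID is an assumed field)
with arcpy.da.UpdateCursor(cameras, ["CameraID", "OFFSETA", "AZIMUTH1", "AZIMUTH2"]) as rows:
    for row in rows:
        if row[0] == 3:
            rows.updateRow([row[0], 100, 180, 270])

# Cells visible from the observers, given their offsets and viewing angles
vis = Viewshed(dem, cameras)
vis.save(r"C:\GIS\Boston.gdb\Camera_Viewshed")
```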
I was pleased with how this analysis came out, as the area around the finish line is quite visible. A way to improve this analysis, in my opinion, would be to also rank cells by distance from the camera. A camera does not see as well far away as it does close up, and this should be taken into consideration. I have worked with closed-circuit television (CCTV) monitoring, and have seen this first hand. Visibility analysis is clearly a tool that can be used in a multitude of applications, and it was neat to see how my other classmates felt it could be used. That is definitely a benefit of this class: we all have different backgrounds, so we see different "big pictures".

Monday, June 8, 2015

Week 3 - AppsGIS - Watershed Analysis

Figure 1. Final map comparing modeled and given streams and a watershed on the island of Kauai.

This week we worked on watershed analysis of the Hawaiian island of Kauai, comparing modeled results with actual streams and watersheds. First, we performed watershed delineation using streams as pour points. In order to do so, the digital elevation model (DEM) was filled using the Fill tool to remove any sinks. Most of the sinks removed were in the low-elevation areas on the west side of the island. Once the model was hydrologically correct, the Flow Direction tool was used to establish how the streams would flow. Following this, we used the Flow Accumulation tool, with the flow direction raster as an input, which resulted in a stream network:
Figure 2. Modeled stream network.
We then set a condition that streams are defined as cells with at least 200 cells of accumulated flow. The resulting raster was turned into a feature class via the Stream to Feature tool. We additionally created a stream order raster that used the Strahler method to order the streams created via the Conditional tool.

The next part of our analysis was delineating a watershed using stream segments (created with the Stream Link tool) as the pour points. We then used the Basin tool, which relies on the edges of the DEM, to delineate drainage basins:
Figure 3. Delineated basins using the edges of the DEM.
Alternatively, we used the river outlet as the pour point to delineate watersheds. This required us to use the Editor to mark the pour point at the mouth of the river of the largest watershed (dark green in the image above), known as the Waimea watershed. This pour point was at the edge of the DEM, which is why it matched the basin result above. We also used a pour point in the middle of the DEM, a gauging station used by the USGS. This station was not on a modeled stream, so the Snap Pour Point tool was used to correct for this. The Watershed tool was again used to create a watershed raster for the specified gauging station:
Figure 4. Watershed raster based on the USGS gauging station.
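All of the raster hydrology steps from this week chain together nicely in arcpy. Here is a rough sketch of the sequence, with hypothetical paths and a guessed snap distance:

```python
import arcpy
from arcpy.sa import (Fill, FlowDirection, FlowAccumulation, Con,
                      StreamToFeature, StreamOrder, SnapPourPoint, Watershed)

arcpy.CheckOutExtension("Spatial")

dem = r"C:\GIS\Kauai.gdb\DEM"                 # hypothetical paths
gauge = r"C:\GIS\Kauai.gdb\GaugingStation"

filled = Fill(dem)                            # remove sinks so the model is hydrologically correct
flow_dir = FlowDirection(filled)              # direction of flow out of each cell
flow_acc = FlowAccumulation(flow_dir)         # number of cells draining through each cell

# Streams = cells with at least 200 cells of accumulated flow (the lab's threshold)
streams = Con(flow_acc > 200, 1)
StreamToFeature(streams, flow_dir, r"C:\GIS\Kauai.gdb\ModeledStreams")
strahler = StreamOrder(streams, flow_dir, "STRAHLER")

# Snap the gauging station onto the modeled stream network, then delineate its watershed
snapped = SnapPourPoint(gauge, flow_acc, 30)  # 30 m snap distance is a guess
ws = Watershed(flow_dir, snapped)
ws.save(r"C:\GIS\Kauai.gdb\Gauge_Watershed")
```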
Finally, we compared our modeled results from above with streams delineated from aerial photos, and with previously mapped watersheds. For the streams, it was quite apparent that the modeled streams are quite different from the given streams at extreme elevations; however, they "match" quite nicely at mid-elevations.
Figure 5. Modeled streams (light blue) compared to given
streams (dark blue) at low elevations.
Figure 6. Modeled streams (light blue) compared to given streams (dark blue) at high elevations.
Figure 7. Modeled streams (light blue) compared to given 
streams (dark blue) at mid-elevation.
For the watershed analysis, I chose the Wainiha watershed to model, with the pour point located at the outlet of the river. Looking at the modeled and given watersheds, they lined up quite nicely. 
Figure 8. Modeled watershed (light purple) compared to given 
watershed (red outline). There was little excess in the modeled
output, but the northernmost point was "missing".
The analyses we used this week were very interesting, and I can see how they will be highly beneficial later down the line. I liked the fact that we performed the analysis using different tools and methods so that we can see the options available to us. 

Monday, June 1, 2015

Week 2 - AppsGIS - Corridor Analysis

Figure 1. Raster of the proposed black bear corridor between two sections of national forest.
This week we worked on least-cost path and corridor analysis. For the first portion, we looked at a few different least-cost paths for a pipeline by creating cost surfaces for slope and proximity to rivers. We created three different scenarios by changing the cost of being close to rivers. For the first path, we looked only at slope, which was reclassified so that the lowest cost was for low slopes (<2°) and the highest for steep slopes (>30°). A cost surface raster was created, followed by a cost distance raster, with accumulated costs as you move away from the source. The source is at the top of the image, indicated in light blue, and the destination is represented with a dark blue asterisk. With this analysis, there were 4 river crossings, which were determined using the Intersect tool. A backlink raster was also created, so that a least-cost path could be created using the Cost Path tool. 
Figure 2. Scenario 1, showing least-cost path with slope as the cost surface.
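A sketch of the slope-only scenario in arcpy; the paths are made up and the reclass breakpoints are only illustrative, not the exact values from the lab:

```python
import arcpy
from arcpy.sa import Slope, Reclassify, RemapRange, CostDistance, CostPath

arcpy.CheckOutExtension("Spatial")

dem = r"C:\GIS\Pipeline.gdb\DEM"              # hypothetical inputs
source = r"C:\GIS\Pipeline.gdb\Source"
destination = r"C:\GIS\Pipeline.gdb\Destination"

# Cost surface: low cost for gentle slopes, high cost for steep ones
slope = Slope(dem, "DEGREE")
cost = Reclassify(slope, "VALUE", RemapRange([[0, 2, 1], [2, 10, 3], [10, 30, 6], [30, 90, 10]]))

# Accumulated cost away from the source, plus the backlink raster needed for the path
backlink_path = r"C:\GIS\Pipeline.gdb\Backlink"
cost_dist = CostDistance(source, cost, out_backlink_raster=backlink_path)

# Trace the least-cost path from the destination back to the source
path = CostPath(destination, cost_dist, backlink_path, "BEST_SINGLE")
path.save(r"C:\GIS\Pipeline.gdb\LeastCostPath")
```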
For the second scenario, we created a cost surface with a high cost for rivers, which resulted in fewer river crossings. In order to combine the two cost surfaces, the Raster Calculator was used. For the third scenario, we used a high cost for rivers, and a slightly lower cost for the area close to a river (within 500 m). We again had two crossings, but they were in different locations. The following image compares the two:

Figure 3. Scenarios 2 and 3, with the path for Scenario 2 in darker red, and Scenario 3 in brighter red. Both paths cross the rivers at two points, but the locations vary due to adding cost to being within 500 m of a river.
Additionally, we created a corridor for the same pipeline. Some layers could be reused, but we had to perform Cost Distance again, this time with the destination as a "source", since the Corridor tool requires two source inputs. After using the Corridor tool to create a range of possible paths, the symbology was adjusted to represent 105, 110, and 115% of the minimum value. My image is slightly off from the example in the lab; I believe that this is due to differences in rounding when calculating the path values. I tried several different sets of numbers, to no avail. The following image is the result: 

Figure 4. Corridor result for the pipeline, with the least-cost path for the third scenario (high cost for rivers, lower cost for adjacency to rivers). Darker corridor is most similar to least-cost path, at 105% of the minimum cost value (path).
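The corridor step itself only needs Cost Distance run from both endpoints; a rough sketch (hypothetical paths again):

```python
import arcpy
from arcpy.sa import CostDistance, Corridor, Con

arcpy.CheckOutExtension("Spatial")

cost = r"C:\GIS\Pipeline.gdb\CostSurface"     # hypothetical combined cost surface
source = r"C:\GIS\Pipeline.gdb\Source"
destination = r"C:\GIS\Pipeline.gdb\Destination"

# The Corridor tool needs a cost distance raster from each "source"
cd_source = CostDistance(source, cost)
cd_dest = CostDistance(destination, cost)
corridor = Corridor(cd_source, cd_dest)

# Keep everything within 105/110/115% of the minimum corridor value, ranked 1-3;
# anything above 115% becomes NoData
min_val = float(arcpy.management.GetRasterProperties(corridor, "MINIMUM").getOutput(0))
ranked = Con(corridor <= 1.05 * min_val, 1,
             Con(corridor <= 1.10 * min_val, 2,
                 Con(corridor <= 1.15 * min_val, 3)))
ranked.save(r"C:\GIS\Pipeline.gdb\Corridor_Ranked")
```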


Finally, we conducted corridor analysis for a black bear corridor between two fragments of the Coronado National Forest. In order to determine the best corridor, cost surface analysis for elevation, land cover, and proximity to roads was conducted. Cost determination was based on the parameters that black bears prefer mid-elevation areas, prefer to avoid roads, and prefer forest land cover types. The three cost surface rasters were combined using the Weighted Overlay tool, with land cover having the highest weight. This result was then inverted, using the Raster Calculator, so that the highest suitability has a value of 1 and the lowest suitability has a value of 10.

Corridor analysis was then performed, using both fragments of the national forest as sources. The same thresholds were used to determine a suitable corridor (105, 110, and 115% of the minimum value). All values above 115% were reclassified as NoData in order to create a raster with only the corridor and source areas. A final map was created to showcase the results:
Figure 5. Final output map for the black bear corridor. Map shows corridor areas ranked by suitability (1-3). 
Overall, I became fairly familiar with the Cost Distance, Cost Path, and Corridor tools. These tools clearly have many advantages when trying to determine the best area to place a path or corridor. Also, I learned the benefit of using the Hillshade tool over simply using the hillshade option when adjusting symbology, especially for elevation. I feel confident in this week's exercise and in being able to implement it in the future.