This lab was primarily an introduction to the ERDAS Imagine software and to using it for some basic photo interpretation tasks. The first part of the lab was to perform calculations using Maxwell's wave theory and the Planck relation to answer questions about various spectral regions of the electromagnetic spectrum. The second part introduced ERDAS Imagine itself: displaying data in the data viewer, exploring some of the settings allowed within the software, and navigating an image. Navigating an image was rather interesting for me. When I used the Pan tool, the program locked into Pan mode and just kept panning slowly; nothing I tried got me out of it, and the program would not respond. Panning by clicking the middle mouse button worked just fine, however, so I will continue panning that way. We also learned how to open a second viewer and to compare different bands of the same image to distinguish certain features. The ease of changing which layer corresponds to the red, green, and blue bands in the Multispectral tab struck me as very useful, and I'm sure I'll use that in the future. I'll probably have to look back at my notes at first to recall which band combination distinguishes particular features, such as snow, mountains, or vegetation type.
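As a refresher on the Part 1 physics, here is a minimal sketch of that kind of calculation; the example wavelength is my own illustration, not one of the lab's actual question values:

```python
# Minimal sketch of the Part 1 calculations: relating wavelength,
# frequency, and photon energy via c = lambda * nu (wave equation)
# and E = h * nu (Planck relation).

C = 3.0e8        # speed of light, m/s
H = 6.626e-34    # Planck's constant, J*s

def frequency_hz(wavelength_m: float) -> float:
    """Frequency from wavelength using the wave equation."""
    return C / wavelength_m

def photon_energy_j(wavelength_m: float) -> float:
    """Photon energy from wavelength using the Planck relation."""
    return H * frequency_hz(wavelength_m)

wavelength = 0.55e-6  # 0.55 micrometers, visible green (illustrative)
print(f"frequency: {frequency_hz(wavelength):.3e} Hz")
print(f"energy:    {photon_energy_j(wavelength):.3e} J")
```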
The third part was to create a map using a classified image of forested lands in Washington state. I added the data to the ERDAS Imagine viewer and added an area column to the attribute table. One thing I really like about this program is the Inquire Box for selecting an area to export: expanding the box by grabbing a corner and moving it by clicking inside it is very intuitive, and creating a subset image and exporting it was straightforward as well. After creating the subset image, I opened ArcMap and simply added it using the Add Data button. From there, creating a map was a matter of changing the symbology and description to what I wanted (in this case, the forested land classification and area) and adding the essential map elements. Below is my map.
This is a map of some forested lands in Washington state, which displays the various forested land type classifications and the area in hectares. The vegetation is displayed in various shades of green, the bare ground and clouds in beige shades, and water in blue.
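As a side note on the area column: for a classified raster, the area of each class can be derived from its pixel count in the attribute table. Here is a minimal sketch, assuming square 30 m pixels; the actual cell size and field names in the lab data may differ:

```python
# Hypothetical sketch: deriving an area column (hectares) from a
# classified raster's attribute table, assuming square 30 m pixels.
# The cell size and the example pixel count are illustrative.

CELL_SIZE_M = 30.0          # assumed pixel edge length, meters
M2_PER_HECTARE = 10_000.0

def area_hectares(pixel_count: int) -> float:
    """Class area from pixel count: count * cell area / 10,000 m^2 per ha."""
    return pixel_count * CELL_SIZE_M**2 / M2_PER_HECTARE

# e.g., a class covering 12,345 pixels:
print(f"{area_hectares(12_345):.1f} ha")  # -> 1111.1 ha
```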
Sunday, September 27, 2015
Saturday, September 26, 2015
Lab 5 - Vehicle Routing Problem
In the first part of this lab, we learned how to carry out a Vehicle Routing Problem (VRP) analysis using the Network Analyst tutorials provided by ArcGIS. We learned to service a set of orders with a fleet of vehicles and to find the best routes to service paired orders.
In the second part, we used what we learned to perform a VRP analysis. In this scenario, a trucking company in south Florida has one distribution center, and we were to use VRP analysis to create optimal routes for one day's worth of pickups. Not only did we want the shortest routes, we also wanted to optimize revenue and cost. In the first scenario, route zones for distribution center 4 were used exclusively. This means that each route zone had one and only one truck, and that truck could service only that zone, even if an order in another zone was closer. Since there are 14 zones, only 14 trucks were used. With these settings, the solution left a few orders that could not be completed and a few orders with time violations.
After this, I created a new VRP using two more trucks. First, for routes whose status was OK (meaning no orders were missed), I set the attribute table to preserve those routes. Then I added two new trucks to the routes layer and solved the VRP again. The new solution improved customer service in a number of ways. First, all the orders were able to be completed, as opposed to the 6 orders left unserviced in the first solution. Additionally, there was only 1 time violation with the new routes, whereas more than one route had a time violation in the first solution. The new solution also increases revenue by $1,625 while increasing cost by $1,152.43 compared to the first solution, for a modest profit increase of $472.57. So the new solution improves customer service by being more efficient, in that all the orders were serviced with fewer time violations, and it benefits the company by increasing profits compared to the first solution. Below is a screenshot of the routes after the addition of the two new routes.
Monday, September 21, 2015
Lab 4 - Ground Truthing and Accuracy Assessment
In this lab, we investigated our LULC map from last week to assess our classification scheme and determine our skill at aerial photo interpretation. Since we obviously could not visit Pascagoula, Mississippi to verify our classification in person, we used Google Maps instead. I created a new shapefile with fields for the old classification, the new classification, and whether or not they match. I used the Editor tool to create 30 new points, spaced relatively evenly throughout the map while ensuring there was at least one point for every classification type. Then I used Google Maps' zoom and Street View features to match the locations of my points and confirm the actual land use/land cover classification type. Once I did this, I calculated the percentage of sample points that were correct: the number of correct points divided by the total number of points, multiplied by 100. In this case, the original land use/land cover classification scheme is 53% accurate. The new map is shown below. The green points are those correctly classified in the original scheme, and the red points are those incorrectly classified. The original classification was performed using only aerial imagery, so distinguishing between similar classification types proved especially difficult (e.g., deciduous vs. evergreen forest, or commercial vs. industrial buildings), and this seems to be where most of the inaccuracies were.
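For reference, the accuracy arithmetic itself is simple; a minimal sketch, assuming 16 of the 30 points matched (which is consistent with the ~53% figure above):

```python
# Minimal sketch of the accuracy calculation used above.
def accuracy_percent(correct: int, total: int) -> float:
    """Overall accuracy as a percentage of correctly classified points."""
    return correct / total * 100.0

# 16 correct out of 30 points is consistent with the ~53% reported:
print(f"{accuracy_percent(16, 30):.0f}%")  # -> 53%
```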
Sunday, September 20, 2015
Lab 4 - Building Networks
For this lab, we were to complete a couple of Network Analyst tutorials in ArcGIS, and then build a network and perform a route analysis with the intent of analyzing networks with and without certain restrictions. I completed the tutorial that walked us through the actual building of a network dataset and the tutorial showing us how to compute the best route. Next, we had to build a network dataset mainly on our own. We were provided with a file geodatabase and the input files required to create the network. Following the lab instructions, I created a network dataset in ArcCatalog, selecting No when asked if I wanted to model the traffic data. Once the network dataset was built, I added the facilities layer as Stops and used the settings in the instructions to calculate the route. It took me a little while to find the information asked for in the deliverables, specifically the expressions used for certain attributes, which are found in the Attributes tab of the network dataset itself. Then I ran the route analysis without the traffic data. I also looked at the restricted turns and streets features. Inspecting the details of the restricted turns, they appear to be defined using the positions and IDs of the street edges involved. The restrictions are also categorized by type (a classification of 7 meaning no turns are allowed) and by the mode of transportation allowed or disallowed (some are passable by pedestrians but not by automobile, for example). The length of each restriction is also recorded.
Next, I examined the Patterns and Street Patterns tables. The Patterns table is the "Profiles" table for the network dataset; each profile describes the variation of travel speeds in 15-minute increments over a day. The Streets Patterns table is the Streets-Profiles table; its records link edge source features with profiles in the Patterns table. By linking these tables together, I could describe the varying traffic speeds over a week. I copied the network features into a new file geodatabase, deleted the old network from the new geodatabase, and created a new network dataset, this time using the traffic data. Using the new network dataset, I created another route with the same facilities, this time including the traffic data. The results are shown below. Without the traffic data, the route takes 97 minutes and is approximately 95,000 meters (95 km) long. With the traffic data, the route takes 105 minutes and is approximately 100,000 meters (100 km) long. The route could be further improved, in my opinion, if we added some set amount of time per stop (the default is 0 minutes).
Route Without Traffic Data
Route With Traffic Data
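As a side note on the Patterns and Streets Patterns tables described above, here is a hedged sketch of how that relationship works conceptually; the table layout, field names, and profile values are my own illustration, not the actual ArcGIS schema:

```python
# Hypothetical sketch of the Patterns / Streets-Patterns relationship:
# each street edge references a daily speed profile sampled in
# 15-minute increments (96 values per day). All names and values here
# are illustrative.

DAILY_SLOTS = 24 * 4  # 96 fifteen-minute increments per day

# Patterns ("Profiles") table: profile_id -> speed factor per slot.
patterns = {
    1: [1.0] * DAILY_SLOTS,  # free-flow all day
    2: [0.6 if 28 <= s <= 38 else 1.0 for s in range(DAILY_SLOTS)],
    # profile 2: slower during a ~7:00-9:45 am rush, invented example
}

# Streets-Patterns ("Streets-Profiles") table: edge -> profile per weekday.
streets_patterns = {
    "edge_42": {"Monday": 2, "Saturday": 1},
}

def speed_factor(edge_id: str, day: str, hour: float) -> float:
    """Look up the travel-speed multiplier for an edge at a time of day."""
    profile_id = streets_patterns[edge_id][day]
    slot = int(hour * 4) % DAILY_SLOTS
    return patterns[profile_id][slot]

print(speed_factor("edge_42", "Monday", 8.0))    # 0.6 (rush hour)
print(speed_factor("edge_42", "Saturday", 8.0))  # 1.0
```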
Tuesday, September 15, 2015
Lab 3 - Land Use / Land Cover Classification Mapping
This lab gave us more experience recognizing ground features in a true color aerial photograph. For this assignment, we were to digitize an area of Pascagoula, MS, and create a land use/land cover map using the USGS Standard Land Use / Land Cover Classification System; classifying the map to Level II was sufficient.
First I added the aerial photograph and created a new shapefile, adding two fields: one for the two-digit code and one for the code description. My main focus when selecting areas to digitize was to stay consistent. I used the Editor tool to draw the polygons and add their classifications to the new LULC shapefile. I started with features that were easier to identify and had clearer boundaries; for example, deciduous forest areas surrounded by residential or commercial areas. I began with the forest areas, then moved on to the wetland areas, especially the islands on the west side of the map. The water was easy to identify but tough to delineate, as the streams meandered and the wetlands were sometimes in the way. Houses have a rather distinctive shape and size, and they tend to be clustered into neighborhoods, so they were not difficult to identify. Commercial and industrial buildings were sometimes difficult to tell apart, but I reasoned that commercial buildings are more likely to be near residential structures than industrial buildings are; both, however, tended to be larger and more square or rectangular than houses. Barren land was usually fairly easy to determine, although its Level II subcategory was sometimes difficult to distinguish. I also ran into a bit of a time issue with this one, so toward the end I was not focused on capturing every tiny curve and nuance in the digitization, though I made sure not to be too inaccurate. This was an interesting but time-consuming assignment, and it gives me a whole new respect for the land cover or land use maps we sometimes download as a layer to use in an analysis. Below is my land use / land cover map of Pascagoula, MS.
Sunday, September 13, 2015
Lab 3 - Determining Quality of Road Networks
In this lab we were to determine the quality of two different road networks. For our purposes here, we are assessing the completeness of the networks, defining "completeness" as the total length of the road network: the longer network is considered the more complete.
First, I wanted to determine the total length of roads for the two networks in kilometers. I added a length field and calculated the total length for each road network. The total length of the Street Centerlines network is 10,805.8 km, and that of the TIGER network is 11,302.7 km, so overall, the TIGER road network is the more complete. Then, we were to determine the completeness within each grid polygon. It took me some time to figure out how to go about this. I initially attempted a spatial join, but that did not allow me to compare the lengths per gridcode. I ended up performing an Intersect of each road network with the Grid layer separately. Then I used the Summarize tool on the GRIDCODE field to get two data tables, which I joined together and then joined to the Grid layer, so that I could compare the lengths of the road networks by their corresponding gridcode. I displayed the data on a map, shown below.
Using the lab's equation for the % difference and a diverging color ramp, a negative value (shown in blue) means that the TIGER road network is more complete than the Street Centerlines network, and a positive value (shown in red) means that the Street Centerlines network is more complete than the TIGER network. The areas with a more equal level of completeness toward the left center of the map seem to be located near cities such as Medford and Ashland, Oregon (cities not shown on map). Overall, the TIGER network seems to be a little more complete than, or nearly even with, the Street Centerlines network. However, the Street Centerlines network does seem to do a little better near the edges of the map, especially the northwest and western edges. There are also a few strongly positive values in the southeast quadrant of the map.
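For anyone reproducing the per-gridcode comparison, here is a hedged sketch of the calculation. I am assuming the common percent-difference form (centerlines − TIGER) / centerlines × 100, which matches the sign convention above, though the lab's exact equation may differ; the lengths below are invented:

```python
# Hedged sketch of the per-gridcode comparison, assuming the form
# (centerlines - tiger) / centerlines * 100. A negative value means
# TIGER has more road length in that grid cell. Field names and
# values are illustrative, not the lab's actual data.

# summarized road length (km) per GRIDCODE, as produced by the
# Intersect + Summarize steps described above:
centerlines_km = {101: 12.4, 102: 8.9, 103: 15.1}
tiger_km       = {101: 13.0, 102: 8.1, 103: 15.1}

def pct_difference(cl: float, tg: float) -> float:
    """Percent difference, positive when centerlines are more complete."""
    return (cl - tg) / cl * 100.0

for code in sorted(centerlines_km):
    d = pct_difference(centerlines_km[code], tiger_km[code])
    print(f"GRIDCODE {code}: {d:+.1f}%")
```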
This was an interesting lab in that it made me think about the differences between road networks and how we take road maps for granted all the time. I think the discrepancies show up most in the real world when using GPS devices in rural areas or near dirt roads, which such networks often do not capture well, and I would expect a road network to be most accurate in or near larger population centers.
Tuesday, September 8, 2015
Module 2 - Visual Interpretation
In this lab, we learned to interpret figures in aerial photos using several criteria: tone, texture, shape and size, shadow, pattern, and association. We also compared true color vs. false color (near-infrared) imagery.
First, we wanted to identify features on a photo based on tone and texture. I used the drawing tool in ArcMap to create polygons enclosing areas of different tones, ranging from Very Dark to Very Light. I then created polygons enclosing areas of different textures, ranging from Very Coarse to Very Fine. The objective here was to determine what type of feature shows up as which tone or texture. For example, a group of houses displays as a very coarse texture because the houses are spread out, whereas water has a very fine texture. Below is my map layout identifying features based on tone and texture.
Next we wanted to identify features based on four criteria: shape and size, shadow, pattern, and association. Shape and size define what an object looks like; for example, on my map one of the buildings is labeled as "looking like" a house. Shadows are an interesting resource when identifying features: they can be a help or a hindrance. They hinder when they block out other features, but they are often invaluable in determining the extent or height of a tall object, especially one that is narrow and hard to see otherwise. Pattern is used to identify groups of objects that seem insignificant individually but make up a larger feature; one good example is identifying farmland by the arrangement of the crops grown there. Association is a trickier criterion: it is used when a feature could have a variety of uses, and you identify it, or narrow down the options, by associating it with something nearby. There are examples of all these criteria used to identify features in the map below.
Finally, we wanted to compare a true color photo to a false color, or near-infrared (NIR), photo. I examined features that I could identify by their color in the true color photo and compared that color to the NIR photo. The most obvious difference was in the vegetation, which appears green in the true color photo and red in the NIR photo because plants reflect more NIR radiation. This is seen clearly in the mixed pine forests southwest of the river; the fairways and greens of the golf course also show it clearly. Clear water appears black in the NIR photo, but water carrying sediment appears blue because it reflects visible light, which explains why the river and marshland are blue and bluish-green in the NIR photo. The concrete bridge and the sandy, bare earth areas don't look much different between the two photos.
This was a really interesting lab, and more challenging than it appears, as I do not have a lot of practice identifying features from aerial photography. It is a good skill to have though. I look forward to the next module.
Sunday, September 6, 2015
Lab 2 - Determining Quality of Road Networks
This week's lab was about testing the horizontal accuracy of road networks while following the procedures of the National Standard for Spatial Data Accuracy (NSSDA). After downloading the data and creating street networks for both the City and the StreetMap USA data sets, I used the Sampling Design Tool add-in for ArcGIS to create 100 random junction points, starting with the City data. I wanted a total of 20 "well-defined" points, which in this case are road intersections. I picked 20 intersections in a reasonably well-distributed manner on the City dataset; I then turned on the StreetMap USA data layer and matched the junctions on that map. Below is a screenshot of my 20 points using the City data.
I created new layers from these selections, so that I now had layers showing the 20 points I was using to calculate the positional accuracy. Then I created a blank reference data layer, in which I used the orthophotos provided to create 20 reference points, again matching the intersections (I used the center of each intersection for the point). Looking at the two datasets alongside the orthophotos, some of the StreetMap USA roads went through buildings, and that road network seemed incomplete in places. The City data seemed much more complete and seemed to run down the middle of the roads shown on the orthophotos, so the City data appeared more accurate. At this point, I used the Add XY Coordinates tool on all three layers and imported the data into two Excel tables, one for the City data and one for the StreetMap USA data. Using the NSSDA worksheet provided, I quickly calculated the sum, average, RMSE, and the NSSDA accuracy.
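For reference, here is a minimal sketch of the math the NSSDA worksheet performs: horizontal RMSE from test-versus-reference coordinate pairs, then the 95% confidence statistic using the standard NSSDA factor of 1.7308 (valid when the x and y RMSEs are roughly equal). The coordinate pairs below are invented for illustration:

```python
# Hedged sketch of the NSSDA horizontal accuracy calculation.
import math

# (test_x, test_y, ref_x, ref_y) for each checkpoint, in feet (invented):
points = [
    (1001.0, 2003.0, 1000.0, 2000.0),
    (1500.0, 2500.0, 1504.0, 2498.0),
    (1800.0, 2100.0, 1797.0, 2102.0),
]

def nssda_horizontal(pts) -> tuple[float, float]:
    """Return (RMSE_r, NSSDA horizontal accuracy at 95% confidence)."""
    sq = [(tx - rx) ** 2 + (ty - ry) ** 2 for tx, ty, rx, ry in pts]
    rmse = math.sqrt(sum(sq) / len(sq))
    return rmse, 1.7308 * rmse  # standard NSSDA multiplier

rmse, acc95 = nssda_horizontal(points)
print(f"RMSE: {rmse:.1f} ft, NSSDA accuracy: {acc95:.1f} ft")
```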
Using the NSSDA procedures, I was able to create an accuracy statement for both datasets:

City Data: This data has been tested to meet 19.4 feet horizontal accuracy at the 95% confidence level.

StreetMap USA Data: This data has been tested to meet 686.7 feet horizontal accuracy at the 95% confidence level.
As suggested by the map appearance when overlaid on the orthophotos, the City data is much more positionally accurate than the StreetMap USA data.