Monday, March 30, 2015

Module 12 - Geocoding and Network Analyst

This week, we learned how to create an address locator, geocode by address matching, rematch unmatched addresses, create a route analysis layer with Network Analyst, and compute optimal routes for our stops. The larger picture, the product we are creating, is a map of EMS locations within Lake County and a sample route from one of them.

First, I downloaded TIGER/Line shapefiles for the area in question from the U.S. Census Bureau website and imported them into our geodatabase. Using ArcCatalog, I found my geodatabase and created a new address locator. In ArcMap, I added the roads and EMS features and selected Geocode Addresses on the EMS data table. The geocoder automatically matched our addresses; a good guideline is an unmatched rate of 5% or less. My unmatched rate after the geocoder finished was 32%, so I needed to manually rematch the unmatched addresses. In the rematch window, I selected Unmatched Addresses and saw the list of candidates under each address I selected. I zoomed in to the first candidate suggestion to get a closer look at the area, then went to Google Maps and found my unmatched address there. I visually located that spot on my map and used the Pick Address from Map tool to select it, manually rematching the address. There was one address I was unable to find, which left my final unmatched rate at ~5%.
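The same geocoding step can also be scripted with ArcPy. Here's a minimal sketch with assumed paths and field names (the lab itself used the ArcMap GUI):

import arcpy

gdb = r"C:\GIS\Module12\module12.gdb"            # assumed geodatabase path
locator = r"C:\GIS\Module12\LakeCounty_Locator"  # the address locator built in ArcCatalog
ems_table = gdb + r"\EMS_sites"                  # assumed EMS address table

# Map the locator's expected inputs to the table's fields (field names assumed).
address_fields = "Street ADDRESS; City CITY; State STATE; ZIP ZIP"
arcpy.GeocodeAddresses_geocoding(ems_table, locator, address_fields,
                                 gdb + r"\EMS_geocoded")

# Check the match rate against the 5%-or-less guideline.
total = int(arcpy.GetCount_management(gdb + r"\EMS_geocoded").getOutput(0))
unmatched = 0
with arcpy.da.SearchCursor(gdb + r"\EMS_geocoded", ["Status"]) as rows:
    for (status,) in rows:
        if status == "U":  # "U" = unmatched in the geocoder's Status field
            unmatched += 1
print("Unmatched rate: {:.1%}".format(unmatched / float(total)))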

Next was selecting a route using Network Analyst. Network Analyst calculates the nearest network location for each stop and symbolizes it with the "Located" symbol. Using the toolbar, I selected the address locator that I had created and found the EMS site at 11 Yorkshire Drive in Paisley, the site I wanted to create a route from. Selecting a route here is just a matter of clicking on the map where I want each stop to be, and I selected 3 stops for my route. I had some trouble with my initial route: the stops kept showing up as "unknown address", even though the rest of the route and directions were working correctly (it used the edges in the roads file for directions). I don't know what caused the issue, but when I selected 3 new stops (a totally different route) it worked fine. I set up the parameters for analysis in the Network Analyst toolbar and computed the best route in distance and time.
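The route solve can be scripted as well. Below is a hedged sketch using arcpy.na, where the network dataset path, the impedance attribute, and the stops feature class are all my assumptions (the lab used the Network Analyst toolbar):

import arcpy
arcpy.CheckOutExtension("Network")

network = r"C:\GIS\Module12\module12.gdb\Transportation\Streets_ND"  # assumed network dataset
stops = r"C:\GIS\Module12\module12.gdb\RouteStops"                   # assumed stops feature class

# Make a route layer, load the stops, and solve for the best route.
route = arcpy.na.MakeRouteLayer(network, "EMS_Route", "Length").getOutput(0)
arcpy.na.AddLocations(route, "Stops", stops)  # snaps each stop to the nearest network location
arcpy.na.Solve(route)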

Then I created the map. To do this properly, I needed to grab the county boundary file from a previous lab. I projected the data to the projection used in this lab and made a new layer with just the Lake County boundary (using Select by Attributes). I added a second data frame to show my optimal route from EMS Station 11 in closer detail, and in the first data frame I added an extent indicator to show where in the county EMS Station 11 and my route were located. I arranged the map and added the essential map elements.
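Scripted with ArcPy, that boundary prep would look roughly like the sketch below; the paths, the WKID, and the "NAME" field are my assumptions, not necessarily what the lab data used.

import arcpy

counties = r"C:\GIS\Module12\fl_counties.shp"      # boundary file from the earlier lab
projected = r"C:\GIS\Module12\fl_counties_proj.shp"

# Reproject to match the rest of the lab data (WKID assumed here).
arcpy.Project_management(counties, projected, arcpy.SpatialReference(26958))

# Select by Attributes, then write Lake County out as its own layer.
lyr = arcpy.MakeFeatureLayer_management(projected, "counties_lyr")
arcpy.SelectLayerByAttribute_management(lyr, "NEW_SELECTION", "\"NAME\" = 'Lake'")
arcpy.CopyFeatures_management(lyr, r"C:\GIS\Module12\lake_county.shp")

The final map is shown below.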


That wasn't the end of this week's lab. We also learned about ModelBuilder, an application that allows you to create, edit, and manage models. Models in this context are workflows that string together sequences of geoprocessing tools, feeding the output of one tool into another as input. This was an online ESRI training module that taught us the basics of the application. I like this tool because it helps me visualize what I'm trying to do, both the individual steps and the larger picture of the final product, especially when there are a lot of intermediate steps. It makes things easier when you're buffering different layers to different geographic extents; it helps me keep track of which input is affecting which output. Using a model also helps me "solve" a problem, in that it's easier to see what kind of data I need to collect to produce a desired output. Learning about ModelBuilder was my favorite part of this module.
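One nice feature is that a model can be exported to a Python script. A two-tool chain like "buffer the roads, then clip the buffer to the county" would come out roughly like this (the layer names and distance are placeholders):

import arcpy
arcpy.env.workspace = r"C:\GIS\Module12\module12.gdb"  # assumed workspace
arcpy.env.overwriteOutput = True

# The output of the first tool feeds the second as input, just like the model diagram.
buffered = arcpy.Buffer_analysis("roads", "roads_buf", "300 Meters",
                                 dissolve_option="ALL")
arcpy.Clip_analysis(buffered, "lake_county", "roads_buf_clip")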

Friday, March 27, 2015

Module 10 - Dot Density Mapping

This week we learned about dot density mapping, in which a map uses dot symbols to show the presence of a feature or phenomenon. In this lab, we learned how to join spatial and tabular data, apply dot density symbology, and select a suitable dot size and unit value for our map. We learned about the advantages and disadvantages of dot density mapping and how to use mask functions in ArcMap to manipulate dot placement.

This week, we wanted to make a dot density map showing the population density of the urban areas of south Florida, created entirely in ArcMap. I added the south Florida layer and used the join feature to join the data from the provided Excel sheet to the layer's attribute table. I used the symbology feature to place the dots; I wanted a dot size and value that would show the density properly without too much coalescing, and ended up deciding on a dot size of 4 and a dot value of 20,000 (meaning 1 dot = 20,000 people). After adding the urban land and surface water layers to my map, I wanted to mask the dot map so that dots were only displayed in the urban land area. This is where I (and many others) ran into problems, because the mask always reverted to the surface water layer. After reading the discussion board (and trying to avoid making a second .mxd if possible), I tried dragging the urban land layer to the top of the Table of Contents, thinking that ArcMap might be trying to draw the dots before the urban land layer. Not only did this work, but I could then drag the south Florida layer back to the top without losing the mask. I labeled 4 of the larger cities and the largest lake in south Florida, and I turned off the county boundaries. I added another copy of the south Florida layer underneath to give it a nice background color, and I gave the overall background a gradient fill, which I feel helps make the map stand out.
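For what it's worth, the dot value choice comes down to simple arithmetic: at 1 dot = 20,000 people, a polygon's dot count is its population divided by 20,000. A tiny sketch (the population figure is made up for illustration):

def dots_for(population, dot_value=20000):
    """Number of dots ArcMap scatters in a polygon for a given population."""
    return int(round(population / float(dot_value)))

print(dots_for(2500000))  # a county of 2.5 million people gets 125 dots

Below is my map of the population density of south Florida.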


Tuesday, March 24, 2015

Week 11 - Vector Analysis 2

This week we did more with vector analysis, in a lab with a lot of new material. We learned about two of the more common modeling tools in ArcGIS, buffer and overlay, and about the dissolve tool, which merges the overlapping borders of buffer zones. One of the most interesting aspects of this week's lab was learning to use ArcPy and Python scripting to run the buffer tool. We learned about simple buffers and variable distance buffers, about the 6 overlay operations available and when to use each one, and how to convert between singlepart and multipart layers.

This lab was very interesting and I learned a lot about what kind of buffers to use and when. Our objective for this map was to show areas for potential new campground locations. One of our first buffers was on the Roads layer. We wanted our campground location to be within 300 meters of a road for ease of access. We also wanted our campground to be within 150 meters of a lake (for recreation) and within 500 meters of a river (a little further away to avoid pollution of the river and flooding of the campsite).

The first thing I did was create the roads buffer and use dissolve to merge the borders of overlapping buffer zones. For the water layer, I needed a variable distance buffer (the lake and river buffers are different distances), so I inserted a new field into the attribute table, selected by attributes (to select lakes or rivers), and typed in the desired buffer distance for each. I then created a buffer as described for the roads layer, this time using the "buffdist" field and the "LIST" dissolve type. At this point the lab assignment had us create the different buffers at once using Python, which I feel is a very valuable time-saving tool; a sketch of that step is below. In both the water_buffer and road_buffer layers, I added a field that ends up with a value of "1" or "0": a "1" means that feature is within the buffer and a "0" means it is not. Our campground needs to satisfy both buffers, so I created a union between the two; this layer indicates only the features that meet our requirements for the road and water layers. Next I needed to consider the conservation areas, as we don't want a campground in these areas, so I added that data layer and used the erase tool along with my union layer to exclude any conservation areas from my output. One final step was to convert from multipart to singlepart to show individual features in the attribute table. This layer is displayed on my map in dark green and satisfies all the conditions we want for our new campground. I then arranged my map and added the essential map elements, using a rectangular gradient fill and drop shadows to give the map a better look.
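As a rough reconstruction of that Python step (every layer and field name except "buffdist" is an assumption on my part), the buffers and overlays look like this in ArcPy:

import arcpy
arcpy.env.workspace = r"C:\GIS\Week11\week11.gdb"  # assumed workspace
arcpy.env.overwriteOutput = True

# Roads: a single 300 m buffer, dissolving overlapping rings into one feature.
arcpy.Buffer_analysis("roads", "road_buffer", "300 Meters",
                      dissolve_option="ALL")

# Water: per-feature distances read from the "buffdist" field
# (150 m for lakes, 500 m for rivers), dissolved by an attribute list.
arcpy.Buffer_analysis("water", "water_buffer", "buffdist",
                      dissolve_option="LIST", dissolve_field="FTYPE")

# The union carries each buffer's 1/0 flag field; keep features inside both
# (flag field names assumed), drop conservation areas, then split multipart
# features into individual rows.
arcpy.Union_analysis(["road_buffer", "water_buffer"], "buffer_union")
lyr = arcpy.MakeFeatureLayer_management("buffer_union", "union_lyr")
arcpy.SelectLayerByAttribute_management(lyr, "NEW_SELECTION",
                                        "IN_ROAD = 1 AND IN_WATER = 1")
arcpy.CopyFeatures_management(lyr, "both_buffers")
arcpy.Erase_analysis("both_buffers", "conservation", "candidate_sites")
arcpy.MultipartToSinglepart_management("candidate_sites", "candidate_sites_single")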

This lab really packed a lot of information in and I learned a lot, especially about the variable distance buffers and using ArcPy. I appreciated that this lab made me think about why I was doing a certain step - I always had to think about what information the output was showing me, which is important when making a map. It's also useful (to me) to have a "reason" or a "project" (in this case sites for a new campground), as it helps me understand the concepts better.


Friday, March 20, 2015

Module 9: Flow Line Mapping

This week's lab was about flow line mapping. Our objectives were to learn how to assess design issues for flow line mapping, use Microsoft Excel to calculate proportional line widths, and construct a flow line map using proper design techniques, styling, and visual effects. Flow maps show the movement of phenomena between locations, often using lines of varying widths to show the amount of movement. This week, our objective was to create a flow map of immigration to the United States from around the world. For this map, we used CorelDraw exclusively: the flow lines are easier to create in CorelDraw than in ArcMap, as they need to be hand drawn, and we wanted our flow lines to have proportional thickness.

This was a really interesting lab in that I finally feel I have enough of a handle on CorelDraw to spend some time on the visual effects. I spent quite a bit of time trying different transparencies on the map background and finding one I liked. First, I needed to decide which basemap I wanted to use. Among the examples, I really liked a couple that used the United States as the center with the continents arranged around it, and I went with that idea. To break apart the continents, I used Object Properties to select all the objects for a particular continent; they used different colors, so I mainly grouped them by color and matched the outline objects. I moved each continent a distance away from the United States layer, making sure to leave enough room for the essential map elements at the bottom. I used ColorBrewer to help select a good color scheme for the states in the United States layer. Once I had my continents and the United States placed as I wanted, I created the legend using the text tab and the shape tool to create the text and rectangles. I then used Microsoft Excel and the equation:

Width of line symbol = (maximum line width) x (SQRT value / SQRT maximum value)

to find the proportional width of the flow lines. I went back to CorelDraw and created the flow lines and the text labeling the continents. I used drop shadows and extrusion to make the flow lines stand out more, and I used transparency on the main layer to give the overall map a more professional look. The transparency took a little while, as there are different types, and I had to manually adjust the colors to minimize the washing-out effect the transparency tended to have on my map.
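The same calculation works as a quick Python sanity check on the Excel numbers (the 10 pt maximum width and the flow values are made-up examples):

from math import sqrt

def line_width(value, max_value, max_width=10.0):
    """Width of line symbol = max width * (sqrt(value) / sqrt(max value))."""
    return max_width * sqrt(value) / sqrt(max_value)

print(line_width(4000000, 4000000))  # the largest flow gets the full 10.0 pt
print(line_width(1000000, 4000000))  # a quarter of that flow gets half the width: 5.0 pt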

I really enjoyed this lab. I learned a lot more about CorelDraw and am much more comfortable with the program now, and I'm looking forward to the next lab.



Friday, March 6, 2015

Module 8 - Isarithmic Mapping

In this week's lab we learned about isarithmic mapping. It was one I was particularly looking forward to, as I am a meteorologist, and we use these maps a LOT. We learned about the PRISM interpolation method (I had heard of their datasets before but hadn't had the opportunity to use them until now) and different types of symbology for isarithmic mapping, and we worked with continuous raster data. We also learned to employ hillshade relief and create contours.

What is PRISM? PRISM stands for Parameter-elevation Relationships on Independent Slopes Model. It's an interpolation method that incorporates elevation into the surface using a digital elevation model (DEM). It calculates a climate-elevation regression for each grid cell, where monitoring locations are assigned "weights" depending on the physiographic similarity of the station to the grid cell. These weights are determined by several geographic and meteorological factors. PRISM then develops spatial climate datasets to show short- and long-term climate patterns. This method is especially useful in the mountainous and coastal western United States, where there is often sparse data coverage, rain shadows, and/or sharp elevation gradients. For more information on PRISM and to access datasets, visit www.prism.oregonstate.edu.
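To make the regression idea concrete, here is a toy example, emphatically not PRISM itself: a weighted least squares fit of precipitation against elevation for one grid cell, with made-up stations and similarity weights.

import numpy as np

elev = np.array([120.0, 450.0, 800.0, 1500.0])      # station elevations (m)
precip = np.array([900.0, 1100.0, 1400.0, 2000.0])  # annual precipitation (mm)
w = np.array([0.9, 0.7, 0.5, 0.2])                  # physiographic-similarity weights

# Weighted least squares fit of precip = a*elev + b.
A = np.vstack([elev, np.ones_like(elev)]).T
W = np.diag(w)
a, b = np.linalg.solve(A.T.dot(W).dot(A), A.T.dot(W).dot(precip))

cell_elev = 600.0  # DEM elevation of the target grid cell
print("Predicted precipitation: %.0f mm" % (a * cell_elev + b))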

The two types of symbology we learned were continuous tone and hypsometric tinting. Continuous tone symbology describes when colors and shades of gray smoothly merge into neighboring colors or shades, as opposed to sharply defined boundaries. Hypsometric tinting describes a technique where colors or shading are used to depict ranges of elevation, usually with contours. It allows the viewer to associate light and dark hues with low and high values, respectively.

Below are my two maps of average annual precipitation in Washington from 1981-2010, both created entirely in ArcMap. The first uses continuous tone symbology. I added the data from the PRISM dataset to ArcMap, made sure the data was in the stretched color symbolization, and selected the precipitation color ramp. I modified the labels so that they show three values, with the bottom number being the smallest value and the top number the highest. I also made sure to use the "hillshade effect" option, as the PRISM raster data incorporates elevation into its surface. I then finished the map by adding the essential map elements (legend, north arrow, scale bar, etc.) and some text about the PRISM interpolation method.


For the second map, I used the same template as the continuous tone map. This was the hypsometric tinting map, and for this one I used contours. This was interesting, as I needed to use the Spatial Analyst tools to convert the raster values from floating point to integer. This allowed for clean contours because the cell values were truncated to whole numbers. I also gained more experience with manually classifying data: I used 10 classes and set the ranges as described in the lab. I again made sure the hillshade effect was checked on and used the precipitation color ramp. I created contours for both maps but decided to use them on the hypsometric tinting map only. I like the fact that creating contours with the Spatial Analyst tools is relatively user-friendly; a sketch of those steps is below.
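Here are the raster conversion and contour steps sketched with arcpy.sa (the paths and the contour interval are assumptions; the lab used the Spatial Analyst toolbox):

import arcpy
from arcpy.sa import Int, Contour
arcpy.CheckOutExtension("Spatial")

precip = arcpy.Raster(r"C:\GIS\Module8\prism_precip")  # assumed PRISM raster path

# Truncate floating-point cell values to integers so the contours come out clean.
precip_int = Int(precip)
precip_int.save(r"C:\GIS\Module8\precip_int")

# Contour lines at an assumed 10-unit interval.
Contour(precip_int, r"C:\GIS\Module8\precip_contours.shp", 10)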


I thought these maps both looked very sharp and told the viewer the important information at a glance. I personally like the hypsometric map a little better due to the ease of reading the legend: I can glance at a location on the map and see the numerical range of annual rainfall that color falls in. That's more difficult to do with the continuous tone map, although both do an excellent job of showing exactly where the rainfall is. It's easy to see the smaller amounts of rainfall east of the mountains and the greater amounts to the west. I really enjoyed making these maps and am looking forward to next week.

Wednesday, March 4, 2015

Module 7/8 - GIS Data Search

This module was basically a lab in which we created maps with much less step-by-step instruction. Our objectives were to learn to select GIS data to meet the needs of a project, to download and manage data from multiple sources, and to create maps using that data.

Each student was assigned a different county in Florida to map. We were provided a list of several data layers that were mandatory to use in our map and 4 environmental data layers (of which we had to choose 2). We also had to use a digital elevation model (DEM) and a DOQQ.


In this map, I chose to display a lot of layers. It has 3 data frames so that not too much is in any one place. The bottom left frame displays the cities and towns and major roads. The city of Tampa lies in this county, so there is a rather extensive road network, which I had to clip down to just the major highways. I also filtered the cities so that only larger and medium-sized cities and towns are displayed. The bottom right frame shows the land cover of Hillsborough County, for which I used a light-to-dark green color scale. The top left frame shows the invasive species layer, with the county boundary in red. I chose to use only one invasive species for this map (Caesar's Weed) so that the map would not be too busy. I placed all 3 of my legends in the top right frame, ensured all my data was in a single map projection, and made sure all the essential map elements were there.



This second map of Hillsborough County shows the digital elevation model (DEM), the state parks (public land), the surface water in the county, and the DOQQ I selected. I wanted a color scheme for the DEM that stood out more than the gray scale; I chose the bright green color scale for the state parks and blue for the water, and I placed an inset of the DOQQ beside the county so it's a little easier to see. I ensured the map layers were all in the Albers Conical Equal Area projection and placed the essential map elements.

Overall, I enjoyed this lab, but it was challenging. Downloading and reprojecting the data (when needed) wasn't too difficult, even though some of the data was hard to find. The most difficult part for me was deciding how I wanted to lay out the maps. I had to make decisions quickly as things came up, and I didn't have as much time as I would have liked, but I think the maps came out well in the end. I learned a lot, especially about data selection. I look forward to next week's lab!