In the first part of the lab, we learned to create a classified image in ArcMap from Landsat satellite
imagery using the Iso Cluster tool, which lets us set the number of
classes, the number of iterations, the minimum class size, and the sample
interval as desired. We then run the Maximum Likelihood Classification tool,
with the original Landsat image as the input and the Iso Cluster output as
the signature file, to produce the new classified image. I
assumed at this point we would be done with the unsupervised classification,
but we still need to decide what the new classes represent. At this stage, we are
exploring a smaller image than the one we deal with later, with two vegetation
classes, one urban class, one cleared-land class, and one mixed urban/sand/barren/smoke class.
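Under the hood, a tool like Iso Cluster is grouping pixel values into spectral classes with no training data, in the spirit of k-means/ISODATA clustering. As an illustration only (this toy sketch is not the ArcMap implementation, which runs ISODATA on multi-band signatures), a minimal one-band k-means might look like:

```python
# Toy 1-D k-means sketch illustrating what an unsupervised classifier
# like Iso Cluster does conceptually: group pixel values into k
# spectral classes with no training data.
# (Illustration only -- not ArcMap's actual ISODATA implementation.)

def kmeans_1d(pixels, k, iterations=20):
    # Initialize class means spread evenly across the value range.
    lo, hi = min(pixels), max(pixels)
    means = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    labels = [0] * len(pixels)
    for _ in range(iterations):
        # Assign each pixel to the nearest class mean.
        labels = [min(range(k), key=lambda c: abs(p - means[c]))
                  for p in pixels]
        # Recompute each class mean from its member pixels.
        for c in range(k):
            members = [p for p, l in zip(pixels, labels) if l == c]
            if members:
                means[c] = sum(members) / len(members)
    return labels, means

# Example: two obvious brightness clusters (e.g. shadow vs. urban).
# The three dark pixels end up in one class, the three bright in the other.
pixels = [12, 14, 15, 200, 205, 210]
labels, means = kmeans_1d(pixels, k=2)
```

The real tools also enforce a minimum class size and a sample interval, which this sketch omits for brevity.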
In the second part of the lab, we learn
to use ERDAS Imagine to perform an unsupervised classification on imagery. I
think it’s a little more complicated in Imagine than in ArcMap. First we run the
Unsupervised Classification tool from the Raster
tab. For this exercise, we choose 50 total classes, 25 maximum iterations, and a
convergence threshold of 0.950 (meaning the algorithm stops once at least 95% of
pixels keep the same class between iterations), and we view the imagery in true
color. Here is where the lab got a little time-consuming: we now have an image
with 50 different classes and want to reduce it to 5 (the same number as in exercise 1).
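Conceptually, this reduction is just a lookup table that maps each of the 50 spectral class IDs to one of 5 land-cover codes. A minimal sketch, with hypothetical ID groupings (in the lab, each class is assigned by visually comparing it against the true-color imagery, not by ID ranges):

```python
# Sketch of collapsing 50 spectral classes into 5 land-cover classes
# via a lookup table. The ID groupings below are hypothetical -- in
# the lab, each of the 50 classes is inspected visually and assigned
# to a land-cover class by hand.

LANDCOVER = {1: "trees", 2: "grass", 3: "urban", 4: "mixed", 5: "shadows"}

# Map each original class ID (1-50) to a final class code (1-5).
recode_table = {}
for cid in range(1, 51):
    if cid <= 15:
        recode_table[cid] = 1      # hypothetical: darkest vegetation -> trees
    elif cid <= 25:
        recode_table[cid] = 2      # hypothetical: lighter vegetation -> grass
    elif cid <= 35:
        recode_table[cid] = 3      # hypothetical: bright built surfaces -> urban
    elif cid <= 45:
        recode_table[cid] = 4      # hypothetical: ambiguous pixels -> mixed
    else:
        recode_table[cid] = 5      # hypothetical: dark non-vegetation -> shadows

def recode(raster):
    # Apply the lookup to every pixel of a classified raster (2-D list).
    return [[recode_table[c] for c in row] for row in raster]

classified = [[3, 17, 48], [22, 50, 1]]
print(recode(classified))  # [[1, 2, 5], [2, 5, 1]]
```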
Examining the attribute table displays the 50 classes, and we need to assign
each of the 50 pixel classes to one of our 5 classes (in this
case, trees, grass, urban, mixed, and shadows). The lab suggests we start with
the trees, selecting every class associated with trees and changing it
to a dark green color and the class name of trees. So, I started with the
trees, then moved to buildings and roads, then to shadows, and finally to grass
and mixed areas. To me, the grass seemed somewhat brown, so there may be quite
a bit of grass labeled as “mixed” in my map. After
all 50 categories were reclassified, I had to Recode the image by selecting the
classes belonging to each category and giving them all the same value (1, 2, 3, 4, or 5). Because
no pixels were labeled as unclassified, I folded the empty unclassified class into the
“shadows” class so that it would not display in the legend. Once this
was done, I saved the image and ran the Recode. I then added class names and an area
column to the attribute table. Finally, we
were asked to calculate the area of the image and to identify permeable and
impermeable surfaces as a percentage of the total image area. Permeable
surfaces allow water to percolate into the soil, filtering out pollutants and
recharging the water table. Impermeable surfaces are solid surfaces that don’t
allow water to penetrate, forcing it to run off. So, I decided that the
buildings and roads are 100% impermeable, the grass and trees are 100% permeable,
and the mixed and shadow classes are 75% permeable and 25% impermeable, percentages I
estimated by visually inspecting the overall image. I feel this is a pretty good approximation of land use in and around the UWF campus based on the original imagery. As noted above, since much of the grass appeared more brown than green, I may have classified some of it as “mixed” instead of “grass,” but overall the result looks good. Below is an image of my final map.
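The permeable/impermeable split above reduces to a weighted sum over the class areas. A quick sketch of the arithmetic, using the permeability fractions from the text but made-up area values (the real numbers come from the area column in the attribute table):

```python
# Percent permeable vs. impermeable surface, weighting each class by
# the fractions assumed in the write-up: urban 0% permeable, trees and
# grass 100%, mixed and shadows 75%. The area values below are made up
# for illustration.
PERMEABLE_FRACTION = {
    "trees": 1.00,
    "grass": 1.00,
    "urban": 0.00,
    "mixed": 0.75,
    "shadows": 0.75,
}

def permeability(areas):
    # Weighted permeable area as a percentage of the total area.
    total = sum(areas.values())
    permeable = sum(areas[c] * PERMEABLE_FRACTION[c] for c in areas)
    return 100 * permeable / total, 100 * (total - permeable) / total

# Hypothetical class areas (e.g. in hectares).
areas = {"trees": 120, "grass": 80, "urban": 150, "mixed": 40, "shadows": 10}
perm_pct, imperm_pct = permeability(areas)
print(round(perm_pct, 1), round(imperm_pct, 1))  # prints 59.4 40.6
```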