Thursday, December 3, 2015

Arc Collector

Introduction:
Many of us have a GPS that we can take out to collect points, then go back and add attributes to those points later on. Others may have the ability to TYPE out the attributes right in the field; however, this is a tedious process that is not realistic when you have a short time frame to collect data. There is a solution though... Using ArcCatalog, ArcMap, ArcGIS Online, and Arc Collector, we can be proactive and create the spatial layer in the lab, so that when we go out to collect the data the process is much more efficient: we can now collect the point and SELECT the attributes from domains instead of typing them out. This is done mainly through Arc Collector, an app you can get for your smartphone, making your smartphone more powerful than your friend's expensive GPS.

Study Area:
The study area I will use to demonstrate this process is my family farm, located in eastern Eau Claire County (Figure 1). I took these points during the opening weekend of the 9-day Gun Deer Season. I tend to want to enjoy the outdoors when I am hunting, so much of the wildlife I encountered did not get recorded because I did not have my phone on me or available.
Figure 1: A map showing the location of my family farm relative to the rest of the county


Methods:
To begin, we first need to decide what we are mapping. Once that is narrowed down, we can create a feature class in ArcCatalog and start creating our domains. Domains are basically rules for what we can and cannot enter while in the field. We can set the minimum and maximum values for data entry to ensure that we don't enter a number so far out of whack that we can't even figure out which digit to remove. We can also set which field attributes we will enter data for and which specific data values we can select. This vastly expedites the data collection process and makes a geographer's job much easier.
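To make the idea concrete, here is a minimal sketch in Python of what a domain enforces. The field names and values below are made up for illustration, not my actual geodatabase domains, but the logic is the same: a coded value domain restricts a field to a preset list, and a range domain restricts it to a min/max.

```python
# Hypothetical wildlife domains -- real domains live in the geodatabase,
# but conceptually they are just rules checked at data entry.
CODED_DOMAINS = {
    "species": {"Deer", "Turkey", "Squirrel", "Coyote"},   # coded value domain
    "maturity": {"Juvenile", "Adult", "Unknown"},
}
RANGE_DOMAINS = {
    "count": (1, 50),   # range domain: min and max allowed values
}

def validate(field, value):
    """Return True if the value would pass the domain rules for this field."""
    if field in CODED_DOMAINS:
        return value in CODED_DOMAINS[field]
    if field in RANGE_DOMAINS:
        lo, hi = RANGE_DOMAINS[field]
        return lo <= value <= hi
    return True   # no domain on this field, anything goes

print(validate("species", "Deer"))   # a valid coded value
print(validate("count", 500))        # out of range, rejected
```

This is exactly why field entry gets so fast: the app only ever shows you values that would pass `validate`, so there is nothing to type and nothing to mistype.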

Once the domains are created for that geodatabase, we open up our shapefile in ArcMap, sign in to ArcGIS Online, and publish the map to our enterprise account with the university. This allows us to upload the map/shapefile to the cloud so that we can access it through the Arc Collector app. Doing this can be difficult, and many people have problems, because if you miss one step of the upload directions, you won't be able to open the layer on your phone. But for those of you that know the advantage of ArcGIS Help, you can breeze through it.

Now that you have your shapefile uploaded to the cloud, you can grab it from the Arc Collector app and begin taking points for whatever it is that you are mapping. It is really simple: all you have to do is take the point by pressing Collect, and the app gives you a drop-down menu of all the fields and the preselected attributes you assigned as options for those fields. This makes everything simpler and faster when you are outside in the cold and don't want to lose feeling in your fingers typing out long field entries.

To get this data back into ArcMap, you go back to ArcGIS Online, open up your map, and download the shapefile you have been working with back onto your desktop. Arc Collector updates in real time, so you do not have to push any data from your app to the online account. From here you can create your maps just like you would with any other shapefiles!


Results/Discussion:
Like I said earlier, the main purpose of this blog is to demonstrate how to use Arc Collector, so I did not go crazy and map out every single car in a parking lot. I did, however, spend a lot of time creating my domains (Figure 2) so that whatever wildlife I encountered over the gun deer season, I had an attribute already created for that animal. This was a rather long and tedious process, but it was worth it: I feel confident that I could give a first-time geographer this app and shapefile, send them on their merry way, and be confident they won't come back asking any questions, because this is as straightforward as it gets. The resulting map is shown below in Figure 3.
Figure 2: Screenshot showing an example of one of my domains titled maturity with the selection options listed below

Figure 3:  Map demonstrating the different point locations that I took with the attributes assigned to them

Conclusion:
All in all, the Arc Collector app is the way of the future. It is simple to set up and use, and it makes the once long and tedious method of data collection much simpler. Anyone can use this app, but it does take someone with a geospatial mind to create the geodatabase and feature class so the domains can be set up in a way that makes sense. There are a few tweaks here and there that ESRI could make, such as the time/date stamp (it just gave me a weird equation) and the fact that if you screw up while setting up the geodatabase domains, you have to start all over again. Besides this, the app could be used for a vast variety of applications by different users, which makes it very powerful!

Sunday, November 15, 2015

Creation of a Topographic Survey with a Dual Frequency GPS and with a Topcon Total Station and Tesla GPS unit

Introduction:
This blog is a culmination of two weeks of activities that include various topographic survey methods. The first week we conducted a topographic survey using a dual frequency GPS and the second week we used the Topcon Total Station. For both of these methods, we had to become familiar with the units as well as go out and collect data points so we could display the topographic surveys through ArcMap.

Study Area:
Our class was tasked with collecting these topographic data points in the Campus Mall, more specifically the campus amphitheater. We used the same location for each survey method so we could compare results along the same surface. This location is for the most part relatively flat, but in some areas it does provide the change in topography needed to make topographic maps that stand out. This makes it easier for us beginners to practice using the equipment in easy terrain. As shown by Figure 1 below, the study area is quite small compared to somewhere like the Priory. This will also allow us to eventually combine our results to make a larger or more detailed topographic map.


Figure 1: Map showing the location of the campus amphitheater in the UWEC campus mall 

Methods:
As stated earlier, we used two separate methods in creating our topographic survey: one with a dual frequency GPS and the other with the Topcon Total Station. I will explain both of these methods in detail, along with how we implemented them.

Survey with a Dual Frequency GPS
To conduct this survey we needed three essential pieces of equipment: the Topcon HiPer, the Topcon TESLA, and a MiFi wireless router. These three pieces are connected to each other in the following ways: the HiPer is screwed on top of the surveying tripod, with the TESLA unit attached midway on the rod so we could use it comfortably, and the MiFi had to be close enough for the TESLA to pick up its signal and connect to the internet.

Now that all of the equipment was properly connected, we could begin our survey. To do this we had to create a job in the TESLA unit under Topo Survey. Once all the parameters had been set, we could begin data collection, for which we had two different options: the easy way or the hard way... The easy or "quick" way was less accurate but faster, because it averages anywhere from 3-5 fixes per point, whereas the hard or "solution" way averages 7+ fixes, which creates a more accurate survey point. When you use the solution method, you are also given the option to save or discard the point depending on how happy you are with the accuracy. In the interest of time we collected using the quick method, because we needed to collect 100 points in a short amount of time.
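The averaging idea behind the quick and solution methods can be sketched like this; the fixes below are made-up UTM easting/northing values, not our actual data, but they show why more fixes per point generally means a steadier result.

```python
# Each "fix" is one instantaneous (easting, northing) reading from the
# receiver; the unit averages several fixes into one stored survey point.
def average_point(fixes):
    """Average a list of (easting, northing) fixes into one point."""
    n = len(fixes)
    easting = sum(f[0] for f in fixes) / n
    northing = sum(f[1] for f in fixes) / n
    return (easting, northing)

# Three noisy fixes, as in the "quick" method (hypothetical coordinates).
quick_fixes = [(621003.2, 4961507.1),
               (621003.6, 4961507.5),
               (621002.9, 4961506.8)]
print(average_point(quick_fixes))
```

The "solution" method is the same arithmetic with 7+ fixes, so random jitter in any single reading has less pull on the final averaged point.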

The class had a problem with the university's subscription on the TESLA, and as a result we could only use the unit in demo mode, which meant we could only take a maximum of 25 points per job. This was no big deal, since we just had to create four separate jobs, which also allowed us to become more familiar with the interface. Another point of interest: every time we moved the tripod to a different point, we needed to level out the rod so that our accuracy was not thrown off.

Survey with the Total Station
The second way of surveying that we learned was using the Topcon Total Station. This system was mounted on a tripod and once again connected to the TESLA unit. The advantage of the Total Station is that it corrects for elevation, GPS location data, and distance. This method is very similar to the distance/azimuth method of surveying we did earlier in the year. If you would like a refresher on distance/azimuth survey methods, click the link to my blog post: http://nikanderson336.blogspot.com/2015/10/distanceazimuth-survey-methods.html

To set up the Total Station we first had to take two very important points. The first is the occupy point, which is where the Total Station will be located; it serves as "home base" for the remainder of the data collection. The other point we had to shoot was the backsight point, which is essentially the "zero" the Total Station uses for calculating azimuth values. After we collected those points, we put the Total Station over our occupy point and began the tedious process of leveling the Topcon out. We first made sure the Total Station was at a height comfortable for all group members to use without crouching or getting on our tippy toes. Once it was at the desired height, we roughly leveled the tripod using the actual tripod legs and the common level found on the Total Station. Once the rough level had been achieved, we moved on to a more precise level, which required a very detailed look at the three sides of the base of the Total Station. We had to make sure that once we got one side leveled, we turned the Total Station around to the other side but only used the one dial on the right side that we hadn't touched previously. Otherwise, every time we leveled out one side we would screw up the side we had just leveled.
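For the curious, the azimuth math the Total Station performs once the backsight defines its zero looks roughly like this: the bearing from the occupy point to any target, measured clockwise from grid north. The coordinates here are hypothetical, just to show the arithmetic.

```python
import math

def azimuth(occupy, target):
    """Bearing in degrees, clockwise from grid north, occupy -> target.

    occupy and target are (easting, northing) pairs.
    """
    de = target[0] - occupy[0]   # change in easting
    dn = target[1] - occupy[1]   # change in northing
    # atan2(de, dn) puts 0 degrees at north and 90 at east,
    # matching surveying convention; % 360 keeps it non-negative.
    return math.degrees(math.atan2(de, dn)) % 360

occupy = (0.0, 0.0)
print(azimuth(occupy, (0.0, 10.0)))   # due north -> 0.0
print(azimuth(occupy, (10.0, 0.0)))   # due east  -> 90.0
```

Note the argument order: surveying azimuths use `atan2(de, dn)`, not the math-class `atan2(dy, dx)`, because north (not east) is the zero direction.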

Now that everything was set up, we could begin data collection, which is quite simple yet not easy. One person would man the Total Station while the other would take the reflector rod and move around to various points along the terrain we were surveying. The person holding the reflector rod has to hold perfectly still, so that while the other partner is meticulously aiming the laser at the reflector, the reflector doesn't move and force the Total Station to play catch-up.

Once we completed surveying, we exported the data from the TESLA unit onto a flash drive and then imported it into ArcMap, which allowed us to make the topographic products shown below. We did this the same way for both surveying methods.

Discussion/Results
We can see from the maps below (Figures 2 and 3) that both methods will get us a decent rendition of a topographic surface. The next questions to ask would be: how much money do you want to spend, what terrain are you going to be in, and what kind of accuracy are you trying to achieve? Each system has its advantages and disadvantages, but your decision as to which to use should be based solely on the situation that calls for the topographic survey in the first place.

Figure 2:  Map showing the topographic surface created from using the Dual Frequency GPS
Figure 3:  Map showing the topographic surface created from using the Topcon Total Station

Conclusion:
When it comes to making topographic surveys, the first questions you need to ask are where the survey will be and what the intended accuracy is. Throughout the semester we have now learned many different ways to create surfaces within ArcMap, as well as various ways to survey those surfaces. Yes, in a perfect world you would take the Total Station wherever you go because of its accuracy and simplicity, but that thing is heavy, bulky, and wouldn't work out in the bush. For this reason you must decide which survey method best suits your project and go from there. Both of the methods we used in this activity were great for the locations where we used them, but if we were on a steep slope or far from accessible terrain, we would be out of luck if we had to carry this equipment around and level it every time we wanted to take a point.

Friday, October 30, 2015

Navigation with Map and Compass

Introduction:
This activity was the second part of a two-week demo on field navigation. Last week we constructed our navigation maps and focused on the coordinate systems and grid strategy; this week we got to actually go out in the field and test our navigation skills. We had the same groups as before, and as a class we met in the parking lot at the Priory to go over some details on how the mission was supposed to be carried out. The task at hand was to navigate, using our map and a compass, to 5 different points within the Priory. We were given a GPS so it could track our movement, as well as the coordinates of each of the points so we could plot them on our map before heading out.

Study Area:
As described in the previous lab, the Priory is a piece of property owned by the university foundation located south of town (Figure 1). We got to the Priory at 3pm and left just before 6pm, after the whole class had returned from the wilderness of the Priory. It was cloudy, with winds out of the east at about 5mph and a temperature of 50 degrees Fahrenheit. One thing to note is that the Priory is mostly wooded terrain, and as we all know that can cause trouble for our GPSs, but luckily for us the leaves had mostly fallen by now. This gave us the optimal time to check our navigation maps and plotted points against the actual coordinates we were given. My group was group number one, and we were given five points with the coordinates in UTM (Figure 2).
Figure 1: Location of the Priory relative to our university campus
Figure 2: Map showing the five points that we were tasked to navigating to

Methods:
Our methods were pretty straightforward: we were given a compass, a Garmin eTrex GPS, and an 11x17 printed version of the navigation maps we constructed a week prior to this activity. The only other things we received were some instruction on the compass and our coordinates in the format shown below in Figure 3.
Figure 3:  A snapshot of our coordinates in UTM

We then took the above coordinates and plotted them on our map using the grid system I created. Once the points were plotted, we were able to take our compass and determine the azimuth from our starting location to our end location. We wrote down the azimuth in degrees as well as the distance from one point to the other, so we knew how far we needed to pace. Evan was our pace counter, and his 100 meter pace count was 61 steps. Using this, we took the distance in meters from point to point and multiplied it by 0.61 to get the number of steps Evan had to pace off when walking through the brush. With all of this information in hand we set out to look for our first point. Evan was the pace counter, Grant was the leap frogger, and I was the azimuth control. I told Evan where to go and how many total paces he needed to walk, and he walked to the furthest point that we could tell was still in line. We needed to stay as straight as possible from start point to end point, and having three people made that easier than if there were only one or two of us. This process was repeated every time we reached our plotted point.
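The pace arithmetic above is simple enough to write down: with a 100 meter pace count of 61 steps, the number of paces is just the distance in meters times 0.61. A quick sketch, using Evan's pace count (any other distance values below are just examples):

```python
def paces_needed(distance_m, pace_count_per_100m=61):
    """Convert a map distance in meters into paces to walk.

    pace_count_per_100m is the walker's calibrated steps per 100 m
    (Evan's was 61, counting one step per right-foot strike).
    """
    return distance_m * pace_count_per_100m / 100

print(paces_needed(100))   # 61.0 paces, by definition of the pace count
print(paces_needed(250))   # 152.5 paces
```

Doing this once per leg, before you start walking, means the pace counter only has to keep one running number in his head while pushing through brush.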

Results:
This is a great time to transition into our results. I just stated that our process was repeated every time we reached a point. Well, the catch is that we never reached any of our points. Out of the five points we had to find, we navigated to none of them; however, after we threw in the towel and decided to head back to the parking lot, we randomly stumbled upon our second point. While at this point we took out the GPS and marked the coordinates in UTM according to the GPS, so we could compare them to the coordinates we received prior to the start of the lab. Figure 4 shows the GPS/UTM coordinates right next to our point; just above the GPS coordinates you will see the coordinates we received prior to the start of the lab.
Figure 4: Comparison of the GPS coordinates of Point 2 to the coordinates we received at the beginning of the lab 
You can see from Figure 4 that the actual coordinates of the location are quite off. The point is actually off by 10 meters to the west and 32 meters to the south. Even though this may not seem significant, the purpose of this activity isn't to be 100% accurate and walk exactly to the tree marker. Many other groups got close enough and just saw the tree marker in their general region, but when you can only see <10 meters in any direction, being that far off can really screw with you. We were told that not all of the markers might still be there, because they could have been torn down, but the problem is that once you don't find a point, it is hard to move on to the next one because you no longer have an exact starting point. Yes, you could look at the GPS and replot your position, but it doesn't matter if the end points are not in the general area they are supposed to be in. Figure 5 below shows our group's path through the property and where the points we were supposed to find were, according to the coordinates given. I also added a point where the actual location of Point 2 was, so we could compare how far off the points were.
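Those two offsets combine into one straight-line error via the Pythagorean theorem, which is a handy check to run on any pair of UTM coordinates since they are already in meters. A quick sketch of the math for our Point 2:

```python
import math

def offset_distance(de_m, dn_m):
    """Straight-line distance (m) from easting/northing offsets (m)."""
    return math.hypot(de_m, dn_m)   # sqrt(de^2 + dn^2)

# Our Point 2 was off 10 m west and 32 m south of the given coordinates.
error_m = offset_distance(10, 32)
print(round(error_m, 1))   # about 33.5 m of total error
```

So in dense woods with under 10 meters of visibility, the marker sat more than three "visibility radii" from where the coordinates said it was, which is why we walked right past it.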

Figure 5: Map showing the path that our group took to navigate towards all the different points. You can tell that we got right on top of Point #2 but still didn't find it until we randomly walked upon it as shown by the bright green dot.


You can see from the figure above that we didn't even come close to locating three of the five points. That is because we were still stuck on figuring out how to find the second point. When we arrived where we thought point one was going to be located, we assumed the marker just wasn't there and had been torn down; we were fairly certain we were in the right location, so we moved on to the second point for the sake of time. When pacing out the second point, we came to the end of our pace count and saw that once again there was no marker to be found. This is when we tried to use the GPS to walk to the coordinates of our second point as described on paper. This failed miserably, as we just got turned around and around, never actually finding the marker, even though according to the GPS we were right where we needed to be. This caused us a lot of frustration, and we decided to return to the parking lot to regroup with everyone and try to figure out the problem. It was then that we stumbled upon a random tree with a marker on it that, to our surprise, was labeled 1:2, which stands for Group 1, Point 2. This was the perfect ending to the roundabout evening we had just had trying to find point 2; luckily we found it, or else we would not have gotten the data we did and we wouldn't have been able to compare the coordinates.

Discussion/Conclusion
A previous class was tasked with finding and marking the locations of the tree markers, and it seems as though they failed miserably. This activity was purposeful and I got the gist of it, but it was troublesome and very aggravating because no matter what we did, we were walking blind: the coordinates of our target locations were incorrect. This made for a very poor representation of how well we can navigate, according to Figure 5; however, we walked out of this activity with a very good understanding of how to navigate, and really that is what matters. We had to troubleshoot and think critically about why this activity wasn't working, and in doing so we got a lot of practice with azimuth control as well as map interpretation. This made the activity very worthwhile, because we learned that not everything is in the right location, and even though you may be doing everything right when it comes to navigating, you could still be completely lost, which is what we were.


Saturday, October 24, 2015

Navigation Map Construction

Introduction:
This activity sets up another activity that we will be performing later in the semester. We will eventually be navigating at the Priory, but first we must make two navigation maps that will assist us. One map uses Universal Transverse Mercator (UTM) and the other uses the World Geodetic System (WGS). When using UTM we will navigate in meters, and when using WGS we will use decimal degrees. We are split into groups of three, and using these maps, we must navigate to different locations within the Priory.

Study Area:
What is the Priory, you may ask? To keep it short, it is a 110 acre piece of property that the University Foundation bought. It used to be the old St. Bede's Monastery, and now it is an overflow dorm for college students and, for the most part, a waste of space. The property, to my knowledge, is not used for much other than geography labs and "hiking" (if you can call it that). Regardless, it is located 3 minutes south of campus on Priory Road, just south of Highway 94 (Figure 1).

Figure 1: Location of the Priory relative to our university campus
Methods:
The first thing we did for this lab was calculate our pace count. A pace count is basically a backwoods way of measuring distance when measuring tape is not available or practical. Most pace counts measure out to 100 meters, and that is what we did as well. My pace count ended up being 61 steps per 100 meters, counting a step every time my right foot touched the ground. Pace count is something we had to keep in mind while creating our maps.

To create our navigation maps we used ESRI ArcMap 10.3.1 and a Priory geodatabase that already had layers for us to use. The creation of the maps and their features was quite easy; the only part that took some tinkering was the labeled grid that we all had to use. I created my own grid from scratch so I could make it as I saw fit for the upcoming lab. As said above, we had to create two maps in two different projections, one in UTM and the other in WGS. I gave both projections roughly the same grid system: UTM had major grid lines spaced 50 meters apart, while WGS had the equivalent spacing in decimal degrees. On each map I made some of the coordinate labels much smaller when they were the same for the whole grid, such as 44 and 91 degrees on the WGS map. This allowed the smaller, more detailed coordinates to stand out so they were easier to read once we were in the field. On each map I also added three scales. The first was a basic bar scale representing how far 100 meters is (the length of our pace count), the second was a representative fraction (such as 1:3,100), and the third was a verbal scale (1 centimeter on the map equals 31 meters on the ground). These scales will help us when we are plotting our points, as well as when we are in the field wondering how far away we are from our destination. My maps also had an outline of the Priory property to ensure that we knew the absolute boundary of our activity. Of course, each map was fitted with a north arrow as well as a set of coordinate and projection data. The final touch was a layer of 5-foot contours, which will give us some idea of the lay of the land when we get out there.

A little more on the UTM coordinate system: it is set in the North American Datum (NAD) and broken up into different zones based on bands of longitude. The zone I used was Zone 15N. 15 is the number of the zone counting from the starting point at the International Date Line, and the "N" stands for North because we are in the Northern Hemisphere. Figure 2 below illustrates the UTM zones within America, each one being 6 degrees across. One of the advantages of UTM is that it is in meters and has an absolute origin point. Within UTM there is something called the false easting and false northing: each zone's point of origin starts at the intersection of the equator and the central meridian of the zone, but to eliminate negative numbers, the origin is offset 500,000 meters to the east.
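The zone arithmetic above is simple enough to sketch: zones are 6 degrees wide starting at 180 degrees west, and each zone's central meridian sits 3 degrees into the band (and then gets the 500,000 m false easting). A quick check using roughly Eau Claire's longitude:

```python
def utm_zone(lon_deg):
    """UTM zone number (1-60) for a longitude in degrees (-180 to 180)."""
    return int((lon_deg + 180) // 6) + 1

def central_meridian(zone):
    """Central meridian longitude (degrees) of a UTM zone."""
    return -180 + zone * 6 - 3

print(utm_zone(-91.5))        # roughly Eau Claire's longitude -> zone 15
print(central_meridian(15))   # -93, i.e. 93 degrees W
```

So Zone 15N's central meridian (93 degrees W) is assigned an easting of exactly 500,000 m, and everything west of it in the zone still gets a comfortably positive easting.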
Figure 2: A map showing the 10 different UTM zones within the contiguous United States
WGS, on the other hand, is the conventional decimal degrees coordinate system. Nothing fancy here, other than that it is handy because it is what all GPSs record points in. This system was designed by the Department of Defense, while UTM came from the Army Corps. I would venture that the reasoning behind this is that the Army Corps is more focused on individual projects at a small scale, needing only a localized coordinate system, whereas the DOD needs a coordinate system that will work no matter where in the world it is conducting business. WGS is great when working with projects on a global scale, where UTM would have a great degree of distortion if not used within the correct zone. This goes to show that every project calls for its own coordinate system.


Discussion/Results:
The final result was a map fit for any navigator. One of the hardest parts of this activity was finding the middle ground between too much and not enough. You want to put everything necessary on your map, but not so much that it clutters your map and makes it hard to read when you're out in the field. I always try to make my aerial rather large so we can see it better, but I also left enough space around the margins for supporting data such as the grid and scales. These maps were to be printed on 11x17 sheets of paper for the upcoming navigation lab, and each group had to pick one of its members' maps to be used by the entire group. I was fortunate enough to have my group select both my UTM (Figure 3) and WGS (Figure 4) maps to represent our entire group when we were out in the field.
Figure 3: UTM Navigation Map to be used during our navigation exercise

Figure 4: WGS Navigation Map to be used during our navigation exercise
Conclusion:
After creating these maps, there is NO doubt in my mind that the UTM coordinate system is more reliable and more practical for this activity. Decimal degrees are great because they can get very accurate as you keep adding significant digits, but when you're out in the field, being able to look at your map and quickly discern how far you are from your target objective is the real deal. For this reason, I hoped my group would use my UTM map due to its practicality and efficiency. I will note that both of my maps shown above DO have grid lines in place; they just do not show up the best when exported in JPEG format. For this reason, our professor is printing the maps out for us after we send them to him in PDF format. Happy navigating!

Friday, October 16, 2015

Unmanned Aerial Systems Mission Planning

Introduction:
This activity is a short course in Unmanned Aerial Systems, or UAS. We will discuss the basics of UAS and the different platforms that are commonly used. This activity takes us from beginning to end: from mission planning, to the actual flight, to image processing and interpretation. We will not go into the detail that we went through in the actual UAS course here at the university, but if you would like to know more about UAS, you can check out my blog here: http://nikandersonuas.blogspot.com/

Study Area:
Our study area for the image gathering portion of the activity is the point bar across the Chippewa River from campus (Figure 1). There is a big 30x30 ft "24" that some kids made in the sand with rocks and other sediment. I do not know what it represents, but I do know that it made for a quick study area that we could capture imagery of and then process later on.
Figure 1: Map showing the study area in relation to the City of Eau Claire. NOTE: This image was taken during high water and does not reflect the water level at the time of this activity (it was much lower).

Methods:
The first thing we did during this lab was get introduced to UAS and the different platforms we work with. Professor Hupy took us through some of the advantages and disadvantages of both multicopters (Figure 2) and fixed wing aircraft (Figure 3). For example, here are some advantages and disadvantages of each:
Multicopter Advantage: Can fly in a variety of locations with a small turn radius and no runway.
Multicopter Disadvantage: Is not suitable for covering a very large area and has a shorter flight time.
Fixed Wing Advantage: Can cover large study areas and has a longer flight time.
Fixed Wing Disadvantage: Needs a runway or larger area to take off and has a large turn radius.

Figure 2: A Hexacopter

Figure 3: A fixed wing aircraft
This is a great transition into mission planning, because in order to effectively plan a mission, you must know some of the advantages and disadvantages I just went over. For instance, our mission was to collect imagery over a rock formation on the sand bar. Since the rock formation was going to be relatively small, the multicopter option gets a point. Since we needed to focus on the rock formation and collect many different images, the multicopter gets another point. Finally, since it would have been difficult for a fixed wing to land (all there was were water and sand), fixed wings lose a point. All in all, we decided the multicopter was the option that best suited our study. Once we had that figured out, we could go into the mission planning software, called Mission Planner, and start mapping out our study. This software is very easy to use: we simply tell the UAS where we want it to fly, how high the altitude should be, and other parameters such as how fast the UAS should fly. The interface looks something like Figure 5 and simply requires us to draw a box around our area of interest. We can then tweak the parameters, because the software also tells us important details such as how long the mission will take and how many pictures will be taken at a given speed. These are all very important when planning a mission, because you want to make sure your batteries can last long enough for the mission to complete, and you don't want too little or too much data. Figure 6 below is a good representation of how we can change parameters in the planning software. In this example I switched the angle the UAS would fly at from 90 to 0; by doing so I told the UAS to make longer swaths, decreasing the total time the UAS spends making turns. This can significantly cut down on flight time.
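Mission Planner does this bookkeeping for you, but a back-of-the-envelope version of the flight time and photo count estimate might look like the sketch below. All the numbers are made up for illustration, and it deliberately ignores turn time, which is exactly the part the flight-angle tweak reduces.

```python
def mission_estimate(num_swaths, swath_length_m, speed_m_s, photo_spacing_m):
    """Rough flight time (s) and photo count for a lawnmower pattern.

    Ignores the distance and time spent in turns, so fewer/longer
    swaths at the same speed means the real mission finishes sooner.
    """
    path_m = num_swaths * swath_length_m          # total straight-line path
    seconds = path_m / speed_m_s                  # time to fly that path
    photos = int(path_m // photo_spacing_m)       # one photo per spacing
    return seconds, photos

secs, photos = mission_estimate(num_swaths=10, swath_length_m=100,
                                speed_m_s=5, photo_spacing_m=20)
print(secs, photos)   # 200.0 seconds of flying, 50 photos
```

Comparing this estimate against your battery's rated flight time is the quick sanity check before committing to a mission.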

Figure 5: Mission Planning software showing the proposed flight path of the UAS
Figure 6: Example of how changing the flight angle (bottom right) can decrease flight time
Once the mission is planned, we can relocate to the study area for the launch of the UAS. Usually we would have a multicopter checklist to go through to make sure everything is in order, so the flight is safe for both the UAS and, more importantly, bystanders. The multicopter we were using is a relatively small UAS platform called the Phantom (Figure 7). The Phantom is a quadcopter, which means it has four rotors. It is also self-correcting, so while the flight is in progress, if we switch to manual (as we did for this study) we can easily fly the UAS and it will correct itself. This basically means that when one rotor starts to drop or lose RPMs, another rotor will make up for it and gain RPMs. The pilot for this activity was our professor, and he flew the Phantom over the "24" and took pictures manually (Figure 8).

Figure 7: Photo showing the Phantom while our Professor explains some of the specs of this UAS
Figure 8: Image displaying the flight in progress

Once the flight was over and we had collected all of our data, we headed back to the lab for a brief introduction to processing these images. The software that we use for this is called Pix4D, and if there is one thing you need to know about image processing with this software, it is that it takes a very, very long time. Our professor gave us the know-how of which buttons to push, and after that it was up to us to process the images ourselves. When I processed my images it took me around two hours in the "Gucchi" lab. The end result of the Pix4D processing (Figure 9) is a few folders that the software automatically creates in your own student folder. The two main resulting products were a DSM (Digital Surface Model) (Figure 10) and a mosaic (Figure 11). The DSM gives us a good idea of the elevation of our surface, while the mosaic compiles all of the photos together into a high-resolution composite image of our feature "24". I had an issue getting a DSM of the entire study area; either this was an export issue or this is just how it works and I have to mosaic those DSM tiles together myself. On the other hand, the mosaic that I got was in excellent condition and had a very high resolution.
Figure 9: The end result of the Pix4D processing
Figure 10: The DSM result of the processing in tile format. You can see that the left of the photo is green which stands for lower elevation, which makes sense because it is closer to the river.

Figure 11: The resulting mosaic of the processing

The last part of the activity was to put in one hour on the RealFlight flight simulator. We were tasked to do one half hour with a fixed wing and one half hour with a multicopter. I have already put in many hours on this simulator for the UAS class, so I should be able to quickly explain how these platforms feel when flying. The fixed wing, in my opinion, is the easiest and most fun to fly. You can fly these in first person or third person view, with third person being the most realistic. Usually you have a three or four channel plane when flying fixed wings. The three channel planes consist of a rudder, elevator, and throttle, while the four channel planes add ailerons. The rudder controls your yaw, the elevator controls your pitch, and the ailerons control your roll (Figure 12). The throttle controls how much power you are giving the motor and how fast you want your plane to fly. One thing to note with the fixed wings is that they are quite touchy and it doesn't take much to maneuver them, so when you are flying, you want to make very small, controlled movements.
Figure 12: Diagram illustrating roll, pitch, and yaw on a fixed wing

The multicopter is a little harder to fly because it doesn't always fly in the same direction. Planes are always moving forward, while multicopters can move up and down, forward and back, and side to side. It is also hard to distinguish where the front of the UAS is because they are typically symmetrical in shape. These are also touchy, but the hardest hurdle to overcome is knowing which way the multicopter is oriented and, from there, pushing the right sticks to make it fly where you want it to go. One advantage these have is that you can fly them from multiple camera angles: first person, third person, chase, or through the "bomb bay" doors.

Discussion:
If you told me I could only have one platform and I had to do as many geospatial tasks as possible with it, I would pick the multicopter. I would do this because you can control them much more easily and you do not need a lot of room for them to operate. With the mission planning software that we have, we are able to plan out the UAS's flight path ahead of time and not worry about pilot error in flight. Yes, we would always have a pilot on hand so they could take over in case something went wrong, but for the most part, you wouldn't need to. The multicopter is able to carry a variety of sensors that you could use for a lot of different studies and monitoring. With all that being said, I do recognize that every situation calls for its own platform, and I think the fixed wings are great when you have a large study area and need to be flying over 10 mph so that you can get it done quickly.

Scenario Discussion:
We each had to pick a scenario that our professor has come across through consulting work and give our own consulting report. The scenario I picked was:
A pineapple plantation has about 8000 acres, and they want you to give them an idea of where they have vegetation that is not healthy, as well as help them out with when might be a good time to harvest.

Mr. Dole,

I have reviewed your project goals and have come up with a game plan for how we can get this project rolling. You noted in our last email that you wanted to know where your crops were not healthy. The solution to this problem is using our fixed wing UAS and attaching an NDVI sensor to it. NDVI stands for Normalized Difference Vegetation Index, and it gives us a very good representation of how healthy your crops are because it responds to photosynthetic activity. Basically, the sensor measures the sunlight your vegetation reflects in the red and near-infrared parts of the spectrum. Healthy vegetation absorbs most of the red light for photosynthesis and strongly reflects near-infrared, while vegetation that is not healthy reflects relatively more red and less near-infrared. The end result will be a collection of maps that display healthy vegetation as green and unhealthy vegetation as red. This will be simple to interpret, but I can always go through the results with you anyway. If you would like to learn more about NDVI you can click on this link: http://earthobservatory.nasa.gov/Features/MeasuringVegetation/measuring_vegetation_2.php
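For readers curious about the math behind the maps, the index itself is just a normalized ratio of the two bands. The reflectance numbers below are invented examples, not measurements from any actual sensor:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and
    red reflectance (both 0..1). Healthy vegetation absorbs red for
    photosynthesis and reflects NIR strongly, pushing NDVI toward 1;
    stressed vegetation reflects relatively more red, lowering NDVI."""
    if nir + red == 0:
        return 0.0
    return (nir - red) / (nir + red)

healthy = ndvi(nir=0.50, red=0.08)   # high value -> map it green
stressed = ndvi(nir=0.30, red=0.20)  # lower value -> map it red
```

On the final maps, each pixel's NDVI value is simply classified into a green-to-red color ramp.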

The other issue that you wanted to address is when you should harvest. Pineapple, as you know, should be harvested when it is a rich yellow. So what I can do for you is develop a sensor that measures reflected light at around 0.58 micrometers (580 nanometers), the yellow portion of the visible spectrum. What will happen is that the pineapples that are a rich yellow will give us a high value because they reflect more of that light. The result will be a series of maps that show you where the highest value pineapples are that are ready to harvest. This will save you time and money because you will be harvesting at the most optimal time.

Both of these sensors will be flown with our fixed wing aircraft. Due to the large study area I need a platform that will cover ground quickly but also maintain a high quality image. I will need to take off and land multiple times so that I can switch out batteries and start downloading images off the sensor, since this is going to be a data intensive project. After all of the images are taken I will process them and give you a final report on your 8000 acre plantation.

Let me know what you think and when I can start,

Nik Anderson



Conclusion:
This activity hit all the basics of UAS, from the different platforms to the different applications and analysis. I believe that UAS is going to be a difference maker in many realms. They have already contributed to so many studies and helped so many people; I just hope that the UAS community will stay away from any negative publicity because that will only further discourage people from allowing them to operate. Furthermore, I think that knowing all the different steps of how to operate a UAS will greatly improve any chances I may have to work with them some day. It is not enough to just learn how to fly them, because most scenarios are going to call for autopilot anyway. Knowing the different platforms and the planning software will make any missions that I may fly far more efficient.


https://howthingsfly.si.edu/flight-dynamics/roll-pitch-and-yaw
http://www.multiplazaonline.com/Honduras/San_Pedro/logos_Sn_Pedro/H1_SuperJugos.htm
http://diydrones.com/profiles/blogs/full-autonomous-cross-country-soaring-flights-with-the

Friday, October 2, 2015

Distance/Azimuth Survey Methods

Introduction: During this activity we were introduced to a form of surveying that was new to us, but old otherwise. When we cannot access GPS satellites, or when our technology breaks down and we need basic, no nonsense data collection, we can use tools as simple as a tape measure and a compass to map out and delineate different objects in a study area. This form of surveying is used to determine the distance and azimuth of an object from a single collection point. We were tasked to use this method to map out an area of interest in groups of two and to collect information on at least 100 objects.

Study Area: Our group decided to choose UW-Eau Claire's campus mall (Figures 1 and 2) and we focused on two different control points within the mall. The first location I would describe as the middle of the campus amphitheater (Figure 2), while the other location was the south end of the rotunda between Schofield Hall and Centennial Hall (Figure 2). We thought these locations to be pertinent to our study since we could record various items on campus (blocks, light poles, signs, and trees) and overlay them on an old aerial of campus from when the mall was still being constructed. Our study was conducted on September 30th.
Figure 1: UW-Eau Claire's location in the City of Eau Claire

Figure 2: UW-Eau Claire's campus mall and the location of the two Control Points

Methods: To do this survey we had to gather some equipment. First and foremost was the TruPulse 200 laser distance finder. This thing is basically a glorified rangefinder but gives very accurate readings for distance (meters) and azimuth (degrees). Using this and a notebook we were able to shoot various objects and record their information along with a description of the actual object so we could map it later. Instead of writing down "block" or "light pole" every single time we shot an object, we just abbreviated it in the notebook (Figure 3).
Figure 3: Our table we created in our notebook to log all of the data

Now that we had recorded all the information, we transferred the data into a Microsoft Excel spreadsheet so we could prepare it for import into ArcMap. One additional bit of information that we needed to collect was the coordinates of our control points within the mall. Since this data is implicit, we did not record the exact GPS coordinate of every object, but rather just the coordinate of each control point so that we could see the relative location of all the other objects compared to the control point. Like I explained earlier, this is supposed to resemble what would need to happen in the event of a loss of GPS when you needed to collect some points for further referencing. Explicit data would be the collection of all these objects with an exact coordinate, but we ain't got time for that! Anyways, to collect the location of our control points we simply went to Google Earth, found the exact crack in the rock that we stood on for each point, and recorded that GPS coordinate in decimal degrees.

Now that we had a complete spreadsheet, we imported the table into ArcMap and ran the Bearing Distance to Line tool. This tool takes our control points and creates multiple lines from those control points to the locations of our objects as described by our distance and azimuth readings. This tool proved to be quite troublesome and some troubleshooting was required. One of the changes that I needed to make was saving the Excel file as a tab delimited txt file. The tool did not like the way the table was read in the regular Excel format, so changing it to a txt file made it possible. Once I did this, I was able to run the tool and got a file that looked something like Figure 4.
Figure 4: Map showing the result of the Bearing Distance to Line tool
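Under the hood, the tool is doing trigonometry much like this flat-earth sketch (the real tool works on geographic coordinates, so treat this as an illustration of the idea, not of ArcGIS's implementation):

```python
import math

def project_point(origin_x, origin_y, distance_m, azimuth_deg):
    """Convert one distance/azimuth reading into planar XY offsets
    from the control point. Azimuth is measured clockwise from north,
    so east = sin(az) and north = cos(az). This assumes projected
    (planar) coordinates, e.g. a county coordinate system in meters."""
    az = math.radians(azimuth_deg)
    dx = distance_m * math.sin(az)   # easting offset
    dy = distance_m * math.cos(az)   # northing offset
    return origin_x + dx, origin_y + dy

# A tree shot at 25 m, azimuth 90 degrees (due east) of the control point:
x, y = project_point(0.0, 0.0, 25.0, 90.0)
```

Running this for every row of the spreadsheet would give the endpoint of each line the tool draws.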
 Once I glanced at the data, I could tell that all the lines were in somewhat appropriate positions relative to the control points. Next I needed to add the aerial basemap so I could provide further context for my data. I used imagery from the City of Eau Claire for this activity, specifically the three inch 2709_29NW raster file. This proved to be even more troublesome, as the coordinate systems could never get matched up and my data would end up in either Nebraska or Minnesota. I mean, Minnesota?? What an insult to my distance/azimuth data! To remedy this, I went through a series of reprojections until I finally got to a projection that worked for both my data and my imagery. The projection was WGS 84 for the actual distance to line layer. However, I am still a little puzzled, because the coordinate system for the data frame is NAD_1983_HARN_WISCRS_ EauClaire_County_Feet. The layer lines up just fine with the imagery even though they are in separate coordinate systems. This issue is not as pressing as it may seem because all I am doing is providing the relative location of these objects. If the situation were different and I needed the exact location of a pipe underground, this method would not work. For that project you would need a total station with survey grade GPS on it that can accurately shoot these objects to within centimeter accuracy. For the sake of this activity though, what we did is good enough and is still considered cost effective in my eyes.

Now that I had everything lined up, I started a second tool. This tool is used to create an output feature class that represents all my objects as points instead of lines. It is called Feature Vertices To Points. To run this tool I simply added my bearing distance to line layer as the input and then, under one of the parameters, selected END so that it created a point for each line at the end of that line. This tool worked like a charm with no problems whatsoever, and the output looked something like Figure 5.
Figure 5: Map showing the result of the Features Vertices to Point tool
Results/Discussion:
Now that I have my final maps (Figure 6) we can see how accurate this method of surveying really is when compared to how cost effective the project was. The actual points on the map where the objects are located are not the most accurate, but like I said before, it's good enough for what it's for. Yes, we could have taken the total station out there and shot all the trees and blocks. However, imagine this: what if I got hired by a company to make a map of where all the black walnut trees are on their property so that they can log them off? I get out to the job site and realize that my GPS doesn't work because it cannot get enough signal, so now what? One option is to go back to the company and say that I need a total station, and they have to fork over the thousands of dollars it is going to cost to purchase one. Or I could survey those trees using the distance and azimuth method, since the logger is not going to need centimeter accuracy anyway. All they want to know is the relative location of these trees, and if I told them that they needed to go buy a total station so I could do that, they would go find themselves another geographer.
Figure 6: Final Product Map showing the objects and the bearing from our control points.
Conclusion:
Throughout this assignment I ran into small issues such as the coordinate systems not lining up, or the class as a whole having problems with the final lines because the declination was off. These issues can be solved, though, with different troubleshooting methods and knowledge of geospatial technology and the thinking that goes behind it. Part of the lesson I was supposed to learn from this activity is that not only do you not always need the absolute best survey equipment, but in many scenarios you may not have a choice and you're going to have to go old school and figure out how to still accomplish the task at hand.

Metadata:
Created by Nik Anderson
Created at UW-Eau Claire Geography and Anthropology Department
Created for Geospatial Fields Methods
Created on October 2nd 2015
Created with ESRI ArcMap and Microsoft Excel

Monday, September 28, 2015

Visualizing and Refining Terrain Survey

Introduction:
This activity is based on last week's creation of a digital elevation surface. The reason why we are refining the survey is that we have become privy to new and improved methods and therefore should go back through all the work we did so that we can make a better end product. After last week's activity we heard from every group about how they went about surveying their sandbox, and from that discussion we were supposed to refine ours. Our group decided the best way to refine our survey would be to go back and resurvey the points on our map that had sharp changes in elevation so that we could get more detailed data about those structures. Once we collected the data again we would go through the same process as before, making maps in ArcMap as well as exporting them to ArcScene so we can get a 3D representation of the surfaces.

Study Area:
We conducted our second study on September 23rd, 2015 from 8:45am to 10:00am. Our study area was a 4x4 area once again located on the point bar underneath the footbridge on our campus here at UW-Eau Claire. We determined this location to be suitable for this activity due to the sand being deposited by the meandering Chippewa River. The sand allowed us to manipulate it into our own rendition of the City of Eau Claire so that we could survey it for this activity and create the required landforms: a ridge, hill, depression, valley, and plain. Since we were trying to create our own rendition of Eau Claire, we thought it would be appropriate to include the following features found inside Eau Claire: the "Campus Hill", Chippewa River, Half Moon Lake, Carson Park, and the Putnam Park/Third Ward area.

Figure 1: Aerial of Eau Claire showing the location of Campus.

Figure 2: Aerial of Campus showing the study area underneath the footbridge.
Methods:
As stated in the Introduction, we are keeping all of the data previously surveyed and just going back through and resurveying the areas that we thought were of utmost importance. The reasoning behind this is that many groups used a survey spacing that was too large and did not pick up their features accurately. For example, if your group is using 10cm spacing and one of your features is only 10cm across, how can you expect to get any detail out of that data? We determined that the areas of utmost importance were the Chippewa River and Half Moon Lake. When comparing the XY data we had last week to the data collected this week, we can see the areas that we focused on by the density of data points in Figure 3 below.
Figure 3: Our XY data showing where we collected more data points.
Once we have the XY data displayed in ArcMap we can then start to make our 2D interpolation maps. What these maps do for us is create a continuous raster surface instead of having to analyze the data from a vector standpoint. There are many different methods of interpolation, and they all have their own way of estimating the values to assign to each cell that falls between the actual data points. I will be explaining each of these methods along with a 2D and 3D representation of each from my first survey. There are five methods that we had to display, and at the end I will pick my favorite method and create another map from our improved survey. Also, something to note is that since we needed to create five maps from one dataset, I decided to create a flow model so that I could quickly produce the maps.
Table 1: Data flow model showing the creation of the five interpolation maps

TIN (Triangular Irregular Network)
A TIN uses vector data points and, from that data, creates many triangles that are connected at the vertices and bordered by each other's edges. The more points that you have, the more triangles there will be and therefore the more accurate your map will become. I like TIN because it is easy to understand, but the straight edges look unnatural and therefore throw off the realism of the maps (Figure 3).
Figure 3: Side by side comparison of the first 2D map vs. the first 3D map

Kriging
The Kriging interpolation method is a geostatistical method that uses autocorrelation. This method is often used in soil maps and is a multistep process that can only be explained by those of us with a mathematics degree. I like Kriging because it gives a smoother representation of the surface, which to me looks more realistic instead of a bunch of random points sticking out (Figure 4).
Figure 4: Side by side comparison of the first 2D map vs. the first 3D map


Spline
Spline is another method that utilizes a mathematical function. This method is used heavily with gently varying surfaces such as elevation and water table heights. I do not care for spline because at least in my map, it does not give in my opinion a good representation of the features (Figure 5).
Figure 5: Side by side comparison of the first 2D map vs. the first 3D map


Natural Neighbors
This method takes advantage of Thiessen polygons: it uses the neighbors of each given point, creates new polygons that are then weighted, and smooths the result. I like Natural Neighbors because it gives a nice smooth representation of the model while hitting all the appropriate points of elevation change (Figure 6).
Figure 6: Side by side comparison of the first 2D map vs. the first 3D map


IDW (Inverse distance weighted)
IDW uses a linearly weighted set of sample points and assumes that each sample's influence decreases with distance from its location. I do not like IDW for this dataset because its result is way too clustered, with point data sticking out, and it doesn't do a good job of giving the illusion of a continuous surface between the points (Figure 7).
Figure 7: Side by side comparison of the first 2D map vs. the first 3D map
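To make the distance-weighting idea concrete, here is a bare-bones sketch of IDW in a few lines of Python. The sample points and power value are arbitrary illustrations, not our survey data, and this is the textbook idea rather than ArcGIS's implementation:

```python
def idw(sample_points, x, y, power=2):
    """Inverse distance weighted estimate at (x, y) from a list of
    (px, py, value) samples: nearer samples get larger weights,
    falling off as 1 / distance**power."""
    num = den = 0.0
    for px, py, value in sample_points:
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0:
            return value              # exactly on a sample point
        w = 1.0 / d2 ** (power / 2)   # 1 / distance**power
        num += w * value
        den += w
    return num / den

samples = [(0, 0, 10.0), (10, 0, 20.0)]
mid = idw(samples, 5, 0)   # equidistant from both samples -> plain average
near = idw(samples, 1, 0)  # much closer to the 10.0 sample, so pulled toward it
```

This pull toward nearby samples is exactly why raw IDW surfaces show "bumps" sticking out at the data points.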


Now that I have produced maps of the different interpolation methods, I can go through and decide which method I believe was the best for our study. All these methods have their own applications but for our study, I like the Kriging method the best. Knowing this I can address the issues my first kriging map had and think about how I can resample so that I get an even better end product for my second map.

To resurvey, we basically had the same grid system as before, but since we were going to be taking data points in between our former points, we had to renumber the x and y coordinates. Beforehand we had labeled each point as 1,1 or 1,2 or 1,3; for each point we moved over, we just added another number in that direction, spaced out by 5 centimeters. After thinking through how to add numbers between 1,1 and 1,2 we decided to label them by how many centimeters away they were from the axis. So with 5 centimeter spacing, instead of 1,1 and 1,2 we switched them to 5,5 and 5,10. This allowed us to survey points in the middle, such as 5,7. With this new system we were able to get accurate measurements of our most prominent features.
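The renumbering scheme boils down to multiplying the old indices by the spacing, something like this little helper (the function name is my own invention):

```python
def grid_label_cm(col, row, spacing_cm=5):
    """Convert the old integer grid indices (1,1), (1,2), ... into
    centimeter labels (5,5), (5,10), ... so that in-between shots
    like (5, 7) have a place in the same coordinate system."""
    return col * spacing_cm, row * spacing_cm

old_point = grid_label_cm(1, 2)   # the old point 1,2 becomes 5,10
```

Any centimeter pair that is not a multiple of 5, like 5,7, then marks a new in-between measurement.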

Once we surveyed, we uploaded the excel table to ArcMap and went through the entire process again, except this time we only produced kriging maps.

Results/Discussion:
Looking at the results from the last 3D kriging map (Figure 8), I am content with the quality and accuracy of the data. Assuming that we could have resurveyed without our study area being disturbed by all the rain, we would have ended up with an even better product. For any discrepancies that we had, I would blame most of them on the site being disturbed by the rain and some on user error, because we are never perfect. As a matter of fact, one point that I know is user error is the spot on the map where you can see our depression, but then in the middle of that depression there is a big spike, which I associate with a mistake in the data collection process. That is genuine user error and the result of just being sloppy, but it is also a good indicator of whether we did that at other points as well. To my knowledge I don't see any other huge spikes like that, so we must have done quite well.
Figure 8: The second 3D rendition of the kriging interpolation method.

If I had to do this all over again, I would first start with a plan to minimize the amount of time it took to record the data, because in all honesty it was not efficient. Since the first collection took so long, it would have been better if we had just assumed that some of the flat plains were truly flat and not measured them, since they did not provide any meaningful data. I would also stand my ground with my group when I said that we should put our point of origin in the bottom left; because we didn't, all of our resulting maps made it difficult to pick out the features in Eau Claire because the axis was completely flipped horizontally. I was able to remedy this flipping effect in Adobe Photoshop, as shown in Figure 9 below.
Figure 9: Final map of Eau Claire!


I will also say that there is not a huge change in our maps because we did such small spacing the first time (5 cm) as compared to other groups (10cm).

Conclusion:
It is truly amazing what we are capable of when technology breaks down and all we have is string and a ruler. By creating a simple grid system we were able to create our own terrain, and then duplicate it once we got back to the technology and create some pretty neat maps from it. The biggest and most exciting part of this lab was not having someone hold our hand through it, working as a team to finish the task, and being truly proud of the fact that once we created the resulting maps, we could say that we made them from scratch.

Sources: ESRI, "Comparing Interpolation Methods." n.d. Digital file, ArcGIS 10.3.1 Help.

Metadata:
Created by Nik Anderson
Created at UW-Eau Claire Geography and Anthropology Department
Created for Geospatial Fields Methods
Created on September 25th 2015
Created with ESRI ArcMap and ArcScene, Microsoft Excel



Sunday, September 20, 2015

Creation of a Digital Elevation Surface

Introduction:
Our first activity was created so we could demonstrate the necessary requirements to survey a landscape. What we are trying to create in this activity is a Digital Elevation Surface. We were tasked to do this with virtually no instruction at all. All we were given was a 4x4 box that our professor built for us, along with tape, string, tacks, and rulers. It was up to our own group how and where we wanted to go about creating this surface. Once the Digital Elevation Surface is created, we can look at it through various programs and then run more analysis on our landscape.

Study Area:
We conducted our study on September 15th, 2015 from 5:45pm to 7:30pm. Our study area was a 4x4 area located on the point bar underneath the footbridge on our campus here at UW-Eau Claire. We determined this location to be suitable for this activity due to the sand being deposited by the meandering Chippewa River. The sand allowed us to manipulate it into our own rendition of the City of Eau Claire so that we could survey it for this activity and create the required landforms: a ridge, hill, depression, valley, and plain. Since we were trying to create our own rendition of Eau Claire, we thought it would be appropriate to include the following features found inside Eau Claire: the "Campus Hill", Chippewa River, Half Moon Lake, Carson Park, and the Putnam Park/Third Ward area.
Figure 1: Aerial of Eau Claire showing the location of Campus.

Figure 2: Aerial of Campus showing the study area underneath the footbridge.
Methods:
To begin developing the terrain we channeled our former days as sand castle champions and began modeling Eau Claire within our 4x4 box. Using only our hands we formed all the required features and, to our astonishment, the sand held together quite well with the aid of some water to keep it moist and well compacted.

Figure 3: Our landscape with captions over the appropriate landforms.
Next we had to develop a plan for how to survey our model. We decided through discussion that 5cm spacing would give us enough detail to create the Digital Elevation Surface while also allowing us to complete the data collection in a reasonable amount of time. To make this happen we inserted tacks into the boards on the west and east sides of our model, spaced 5cm apart. This allowed us to take 24 data points along our X axis. We then took string and weaved it in and out of all the tacks to give us a guide when it came time to measure. To give us our 5cm spacing along our Y axis, we used a meter stick marked every 5cm along the north and south edges of our model, giving us 23 data points. Using the meter stick allowed us to save time by not inserting more tacks and string but rather just quickly moving the meter stick. This ended up being a very efficient system, because when it came time for data collection we had one person record measurements on a MacBook Pro while the other two took turns measuring the depth to the nearest millimeter. We simply measured from the string straight down to where the ruler hit the sand and recorded that depth. The data collection itself took roughly an hour, which was good timing because the impending rain was about to wash us out.
Figure 4: Our method for data collection using a meter stick and the recorder using a MacBook Pro.

Figure 5: The meter stick at the angle needed to record the depth of the feature to the nearest millimeter.
Results:
Using our plan for data collection, we were able to look at our model in Microsoft Excel and use an equation to show the actual height of our features. Even though we recorded depth in the field, knowing the height of our box allows us to manipulate the data so that it will eventually show the height of the features on our landscape. We will use that data more in the next activity, so for now all we have is the recorded measurements taken with our meter stick. The table below represents what our Excel sheet ended up looking like. Using this data we will create more maps in ArcMap and ArcScene to show our landscape as a Digital Elevation Surface.
Table 1: Our data in an Excel sheet.

Discussion:
Since this lab was created so that we had to use problem solving and critical thinking skills just to collect data, the discussion is mainly based on our data collection methods and the resulting table. The trouble with this lab is that you have to find a happy medium between excellent detail in your model and a realistic amount of time to spend collecting the data. For instance, we took data points in our model along a 23x24 point grid, which resulted in 552 data points. If we had gone for 1cm spacing instead of 5cm spacing, that would have resulted in 12,544 data points and an estimated data collection time of 22 hours. You can see that one is completely unrealistic even though it would provide the best detail. It will be interesting to see, once we create the model in Arc, whether 5cm spacing was the correct choice.
Figure 6: The group discussing how to go about data collection.
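The spacing trade-off discussed above is easy to sanity check with a couple lines of arithmetic. The seconds-per-measurement rate here is my own assumption, back-solved from our roughly one-hour collection:

```python
def survey_cost(points_x, points_y, seconds_per_point):
    """Total sample count and rough collection time (in hours) for a
    regular grid, given an assumed per-measurement rate."""
    n = points_x * points_y
    return n, n * seconds_per_point / 3600

# Our actual 5 cm grid vs. a hypothetical 1 cm grid (112 x 112 points),
# assuming a made-up rate of about 6.5 seconds per measurement:
n_coarse, hours_coarse = survey_cost(23, 24, 6.5)   # 552 points, about an hour
n_fine, hours_fine = survey_cost(112, 112, 6.5)     # 12,544 points
```

At the same measuring pace, the 1 cm grid balloons to over 20 hours of collection, which is why we ruled it out.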
Conclusion:
Next week we will take this data into Arc and create TINs, rasters, interpolations, and other models from it. Hopefully, once we create those models, someone could look at them and discern that the landscapes within the model depict the City of Eau Claire. They could then see how the model was created and realize that it was the result of no more than playing in the sand and accurate data collection by an experienced group of geographers.