ArcGIS Online – Organizing and Expressing Data Tutorial

While I was making my Japanese Mascot Map (which you can see here!), a lot of my time was spent experimenting with how to organize my data and which data ArcGIS Online could pull directly from the resulting spreadsheets. I explained some of my process HERE, but I want to explain in greater depth how I organized, input, and expressed my data.

I used Google Sheets to store and organize my data, which worked well. However, ArcGIS Online can only upload spreadsheet files in .csv or .txt format. Google Sheets lets you download individual sheets in .csv format, but that means if you edit your data, you have to delete the layer made with the unedited file, download the edited file to your computer, and render the edited file to your map as a new layer. This takes a lot of time. I'm hoping that this tutorial, and the time I spent editing/downloading/uploading, can save you some time on your own project.

~~~

What type of location data do you want to map? This will affect how many spreadsheets you make and how you express location within them. My rule of thumb is to make a new datasheet for a new layer if the locations of places/regions will be rendered to the map by different parameters. In other words, locations denoted by City, State vs. Street Address vs. Latitude/Longitude should be organized in their own spreadsheets. This is helpful for a couple of reasons:

  1. Confuses ArcGIS less – ArcGIS asks you, when you import a layer, which columns it should pull location data from. With multiple datasheets/layers, you can choose which data are rendered in which manner without having to sacrifice accuracy, arbitrarily pick drop points for larger regions, or confuse the program with void entries.
  2. As data sets get larger, problems are easier to find – A number of my spellings didn't match those ArcGIS used, and some of my photo links were broken. It was much easier to delete one layer, find the error, fix it, and re-upload a smaller spreadsheet than it was to do the same for a spreadsheet with 50+ entries.
  3. Easier to stylize different categories of data – Not to mention you only have to re-stylize some of your data if you have to fix a layer. Whenever you (re)upload a layer, the points are set as red dots by default.
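
To make the split concrete, here is a minimal sketch of how you might separate one combined sheet into the two kinds of datasheets before uploading. The rows and column names are my own made-up examples, not anything ArcGIS requires; it just shows the principle that rows located by coordinates and rows located by place names end up in different .csv files, so neither upload contains void entries.

```python
import csv

# Hypothetical combined rows; the column names are my own invention.
rows = [
    {"Name": "Marimokkori", "Prefecture": "Hokkaido", "City": "Sapporo",
     "Latitude": "", "Longitude": ""},
    {"Name": "Tawawachan", "Prefecture": "Kyoto", "City": "Kyoto",
     "Latitude": "34.987756", "Longitude": "135.759333"},
]

# Rows with coordinates go to one sheet, everything else to the other.
latlon = [r for r in rows if r["Latitude"] and r["Longitude"]]
named = [r for r in rows if not (r["Latitude"] and r["Longitude"])]

for path, subset, fields in [
    ("latlon.csv", latlon, ["Name", "Prefecture", "City", "Latitude", "Longitude"]),
    ("cities.csv", named, ["Name", "Prefecture", "City"]),
]:
    with open(path, "w", newline="") as f:
        # extrasaction="ignore" drops the columns a sheet doesn't need
        writer = csv.DictWriter(f, fieldnames=fields, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(subset)
```

Each resulting file can then be uploaded as its own layer.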

In my case, I wanted to map mascots from Prefectures, Cities, Buildings, Organizations, and Companies. I used two different methods for designating location so I made two map layers from two datasheets.

Prefectures and Cities I mapped as points denoted by two columns of data: Prefecture, City (i.e. Hokkaido, Hakodate). Because prefectures aren't associated with any one city, I used the same format with the prefecture's capital as my city marker (i.e. Hyogo, Kobe). It would be the same as making one column each for State and City if you were mapping in the United States.

NOTE!: If you want to make polygons for prefectures/states and not points, I would suggest making a separate spreadsheet for them. I did not use polygons in my original map, but if you include a city, then ArcGIS will pin that city instead of denoting a region.

Buildings, Organizations, and Companies I mapped using Latitude and Longitude. These are things with definite locations, usually denoted by street addresses. However, street addresses are often very different across countries, and that’s before differing spelling conventions for foreign languages. Even in familiar areas, points sometimes don’t get dropped in the right place. The easiest way to get an accurate point the first time is to use its latitude and longitude.

An easy way to find these coordinates is to use Google Maps. To do so:

1.) Search Location

2.) Right click on the point and choose “What’s here?”

A small bar should show up at the bottom of the screen. See the numbers at the very bottom?

The first number is the Latitude. The second number is the Longitude.

In my spreadsheet, I made 4 columns for location data: Prefecture, City, Latitude, Longitude. I included the Prefecture and the City because I wanted to display this information at each point, but when I uploaded the layer the program used the latitude and longitude to drop the pins. A window may pop up asking you to specify which data you’d like to use for locations. In that case, pick your preference.
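
Before uploading, it's worth sanity-checking the coordinate columns. Here is a small helper of my own (not part of ArcGIS or Google Maps) that confirms every coordinate parses as a number and falls in the valid range, which catches a common mistake: accidentally pasting latitude and longitude in the wrong order.

```python
# Latitude runs -90..90, longitude -180..180; a swapped pair
# from Japan will usually fail the latitude bound.
def valid_coords(lat, lon):
    try:
        lat, lon = float(lat), float(lon)
    except ValueError:
        return False  # blank cell or stray text
    return -90 <= lat <= 90 and -180 <= lon <= 180

rows = [("34.987756", "135.759333"),   # a Kyoto point, from Google Maps
        ("135.759333", "34.987756")]   # the same pair accidentally swapped
for lat, lon in rows:
    if not valid_coords(lat, lon):
        print("check this row:", lat, lon)
```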

NOTE!: ArcGIS sometimes gives you the option to limit your expressed dataset to a single country, in my case, Japan. If your data set reaches across countries, include a Country column in each spreadsheet. So, in the first example, the location of a city would be expressed in 3 columns: Japan, Hokkaido, Hakodate. The location of a building would be expressed in 5: Japan, Kyoto (prefecture), Kyoto (city), 34.987756, 135.759333.

You should include a column for any categories you want to distinguish stylistically.

In my spreadsheets, I added a Mascot_Type column. I kept it next to my other non-location data: Name and Name of Building/Company/Organization.

From that data, I could set the layer to display points based on what type of mascot each point represents. When you upload a new layer, a menu called “Change Style” will appear on the left. In the drop-down menu under “Choose an attribute to show,” pick the column where you put your categories.

You can then change how each category appears on the map by changing the appearance of the point. Click on one of the sample points in the “Change Style” menu. A window will pop up with point style options for the category you selected. When you are done, press “OK” both in the window and the “Change Style” menu.

If you want an image to pop up when you click on a point, ArcGIS Online can pull images straight from your spreadsheets. In a column titled “Image” or “URL”, add URL links to the images you want to use for each location. Here is the image of Tawawachan I used and the corresponding link highlighted in yellow. Because URLs are long, I recommend making this the last column in your datasheet.

To add these images to your pop-ups, make sure you pressed “Ok” on any open menus and click the “…” next to the layer you want to add images to. From that menu, click “Configure Pop-up.”

The menu will open on the left-hand side. Here you can change which data is expressed where. To add images, go to “Pop-up Media” and press “Add.” From that drop-down menu, select “Image.”

A window called “Configure Image” will appear. Here you can add titles, captions, and hyperlinks to your images. To add the images from your spreadsheet, go down to “URL” and press the small boxed cross to the right. Scroll down and select the name of the column where you put your image URLs. I called mine “Image.”

Press “Ok” in both the “Configure Image” window and “Configure Pop-up” menu. Once you do, an image should appear in your pop-ups when you click on a point. If you don’t see it immediately, scroll down or enlarge the pop-up window as they are quite small. If you still don’t see an image, the URL in your spreadsheet may be broken.

NOTE!: Make sure to double-check that the links aren’t broken while you’re still working in your spreadsheet. When you render your data onto the map, the program won’t tell you if it can’t find images. It’s better to check before you render your data than after you’ve spent time stylizing your points, because you will have to re-upload the sheet, setting the points back to default red circles.
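
If your sheet has more than a handful of URLs, a quick script beats clicking each one. Below is a sketch of my own (function names and all): it first filters out obviously malformed URLs offline, then, if you ask it to, makes a real HEAD request for each survivor, which assumes you're online when you run it.

```python
from urllib.parse import urlparse
from urllib.request import Request, urlopen

def looks_like_url(url):
    # Offline check: needs an http(s) scheme and a host.
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

def broken_links(urls, check_online=False):
    bad = [u for u in urls if not looks_like_url(u)]
    if check_online:
        for u in urls:
            if u in bad:
                continue
            try:
                # HEAD fetches only headers, so the check stays fast.
                urlopen(Request(u, method="HEAD"), timeout=10)
            except Exception:
                bad.append(u)
    return bad
```

Run it over your Image column before uploading, fix whatever comes back, and you won't have to re-stylize a layer over a dead link.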

Team Mudd Action Items Update

In class on Thursday we investigated methods that we could use to make photo globes throughout Mudd, as well as presentation methods for the final product. Since we’ve all done work over the weekend, here is a general update on progress:

Martin:

  • Worked on the 3D model of Mudd’s exterior
  • Next step: Complete and add textures

Brittany: 

  • Photographed MANY rocks from the Geology department
  • Rendered photographs of 4 rocks into 3D models (because rendering </3)

COOL ACTION SHOTS!

LINKS TO ROCKS!

Lydia:

  • Collected room numbers and colloquial names for 1st floor Mudd
  • Solicited help of friendly Chemistry major for a tour of 1st and 2nd floors on Wednesday (2nd floor locked from stairwell 5pm-8am weekdays and all day weekends. Labs locked except for classes)
  • Pared down form responses to those that can grammatically/contextually stand alone
  • Loosely prioritized locations for photo globes into 4 categories (High Priority, Mid Priority, Low Priority, Priority?)
  • Next Step: Go to second floor during normal school hours with friendly Chem major and complete directory list. Annotate floor maps with prioritized shot list.

Preservation of Mudd Final Project

Brittany, Martin, and I will work to preserve and recreate the cultural history of Mudd Hall of Science before it is torn down this summer. We’re hoping to pull from our experience on the 3D Boston Massacre Project and use the Unity Engine to create a virtual narrative of Mudd with interaction points within the model that provide more detailed information. The recreation will potentially consist of:

  1. 3D recreation of Mudd either using CAD files borrowed from facilities or built by hand in Sketchup from photos and floor plans.
  2. Photographs from the digital archives that show how Mudd was used in years and how it has changed, and current photos of the unique aspects of the building today.
  3. An AV component in the style of Ken Burns which will include some of the photographs mentioned above, and potentially testimonials from those who use Mudd
  4. A guided tour through the building scripted in Twine

We can easily store all the components (images, audio, models, Unity scenes, etc.) in a simple database such as Google Drive and then publish our project via Unity Web Client.

The first step for us is to try and get a hold of the CAD files from facilities and to start looking for photographs in the Digital Archives. If we can’t get the CAD, then we will need to compile the references needed to build Mudd ourselves.

Sketchup Crashed

Unfortunately, I don’t have a model to show today. The same problem I had in class, where I wasn’t able to edit my photo match layers, kept creeping up and was exacerbated by my POV starting below my model every time I tried to align a photo to a building face, despite starting from roughly the perspective of the photo. I tried five or six times with various Carleton buildings until SketchUp crashed, at which point I called it a night, having not really started.

In lieu of a model, below are some notes and thoughts I had while I was trying to get SketchUp to cooperate.

The photos that look the nicest are often not the most useful, and the photos that would be the most useful for hand building are also often not the most useful for layering onto models. I found a fantastic photo of Skinner Memorial Chapel in profile with NO TREES but then realized profile photos would not work with the sight-lines. Unhelpful photos I also found (and which were prolific on Carleton’s website and Google Images) include:

  • photos with fisheye-lens and other distortions
  • cropped photos
  • photos angled so they show more of the environment around the building than the building itself
  • TREES

Satellite images are not perfect bird’s-eye views, and the farther a footprint is from a basic rectangle, the harder it is to trace from a satellite view. I attempted to trace the Weitz building and found that, because the image was taken from above and slightly to the left, it was hard to tell which rectangles were a small wing of the Weitz and which were something built on the roof. Differences in height also complicated drawing the footprint, because if the photo wasn’t taken directly above the building, a taller portion could obscure parts of the rest of the building.

Weather and lighting matter because they change the colors of the buildings significantly. That said, I think it would be cool to have two sides of the same building show winter and the other two sides show spring, as a way to show how the building looked and how the surrounding space was used over the course of a year.

I think in the future I will stick to hand building, but I will definitely make use of georeferencing to make a starting footprint, because having an idea of the basic shape to start with feels better than drawing a rectangle in the dark. I would also try to figure out how to extract textures and colors of buildings from photographs, but not the faces and features themselves. And I will think more about what time of year I want to model my building in when I’m looking for photo references.

Maps! Mascots! oh MY!

When I was studying abroad in Japan last term, I quickly discovered how much Japan loved its mascots. I knew this beforehand, but I had yet to grasp just HOW MUCH Japan loved its mascots. Called “yuru-chara” (but pronounced closer to “yuru-kyara” in Japanese), these mascots represent and promote prefectures, cities, wards, companies, organizations, sports teams, events, YOU NAME IT. There’s even an annual Yuru-Chara Grand Prix!

Funassyi jumping and giggling on Japanese TV set GIF

Since most mascots are tied to regions and cities, I decided to try my hand at mapping them out onto Japan itself. In an ideal world, I would have the prefecture mascots linked to polygons of each prefecture with points for cities, organizational headquarters, buildings, etc., but I decided to start simple and work on a point-style map in ArcGIS. We also spent a little more time stylizing the points in ArcGIS than we did in Carto, so for a first draft of this map ArcGIS felt more comfortable.

I spent the most time trying to find the mascots in the first place (there’s no definitive list out there, so I had to dig deep into my memory and do a LOT of Google searches) and then organizing the information into a Google sheet that could be translated into layers the program could understand. Sparing you my trials and errors, I decided on two differently styled sheets:

Prefectures & Cities
– (Mascot) Name
– Mascot_Type
– Prefecture
– City
– Image URL

Buildings/Organizations/Companies
– (Mascot) Name
– Mascot_Type
– Prefecture
– City
– Building/Organization/Company (Name)
– Latitude
– Longitude
– Image URL

City mascots were pretty easy to drop points on, but prefecture mascots I linked to the prefectures’ capitals to avoid being too vague. Building mascots and the like, on the other hand, I dropped points for via Latitude and Longitude. Because the Japanese and American address systems are so different, I figured that would be the easiest way to get an accurate pin on the map.

By using two separate sheets (and therefore two separate layers on the map), I could indicate the location method separately and make further edits and additions to either spreadsheet easier and faster to upload. There were some locations from the Prefectures/Cities sheet that couldn’t be found after rendering, which I think was caused by alternate spellings (the bane of my existence using map apps in Japan). I wish ArcGIS would flag those errored entries, but it doesn’t, so until I can go through one by one and see which are missing, there will be a few missing points in my map.
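
Since ArcGIS won't flag the misspellings itself, one workaround I've considered is checking my sheet against a list of recognized names before uploading. This is a sketch under a big assumption: that you have (or can type in) a reference list of the spellings ArcGIS expects. Python's standard difflib can then suggest the closest match for anything that doesn't line up.

```python
import difflib

# A tiny reference list I typed in for illustration, versus
# spellings as they might appear in my sheet.
recognized = ["Hakodate", "Sapporo", "Kobe", "Kyoto"]
my_cities = ["Hakodatte", "Sapporo", "Kyouto"]

for city in my_cities:
    if city not in recognized:
        # get_close_matches does fuzzy matching; n=1 keeps the best guess
        guess = difflib.get_close_matches(city, recognized, n=1)
        if guess:
            print(f"{city!r} not found; did you mean {guess[0]!r}?")
        else:
            print(f"{city!r} not found")
```

Catching "Hakodatte" vs "Hakodate" in the spreadsheet is much faster than hunting for missing points on the rendered map.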

Please check out my map on my website!  I recommend playing around on the prefecture/cities scale before zooming into Kyoto and Tokyo. Try and find my favorite mascot, Tawawachan!

This triptych of Tawawachan postcards is above my bed.

You can tell I REALLY like Tawawachan

Databases (or: Ramblings and Questions about Categorization)

(FAIR WARNING: rambling ahead. I couldn’t put my thoughts and questions into words that made sense so please, bear with me. It’s Monday.)

Databases boggle my mind – I honestly can’t tell if it’s because of the great volume of information that we’ve managed to store in databases, or because of the care in categorization that databases require.

Making categories seems to be the biggest issue, as we saw with the wide variance of tags in the Carleton history timeline we made in class. Each event clearly holds meaning to us, but how to express that meaning (Academics? Academia?) and how specifically that meaning should be represented are important and difficult questions. Somewhere in our reading from coding week, the study of code as a linguistic signature of individuals came up. I am sure the same could be done for databases, because as we saw last week, everyone had a different way of organizing and presenting data based on what patterns they saw and what they thought was important to emphasize.

There’s also the question, I think, of what information is important now and what could be important in the future, especially as it pertains to crowd-sourced big data projects like ANZACs. When ANZACs first started, they could have kept the crowd sourcing to individuals’ heights, but they also crowd source names, regiment numbers, causes of death, etc. I wonder how they store that data and how it will be made accessible in the future.

(side note: will there be a generational difference in database-makers who will only use # to denote qualitative categories? Does anyone else remember when # was called the “pound sign”?)

I’m very curious about what the process would be if we wanted to combine the information held in different databases. For example: say that the researchers behind ANZACs found a group in the US and a group in Europe who did similar crowd-sourced projects, which took more or less the same data from similar sources from a similar time period, and who wanted to pool their data. Flat data sets would likely be tedious to merge across different organizational systems, but how much would you have to backtrack through a relational database to get at the keys that equate Location1=New York and AuthorID1=Mark Twain and change them so the two databases can talk to each other without redundancy? By this I mean: if Location1=New York in one data set and Location1=Auckland in another, that would lead to problems. Once the data is large enough that doing it manually is impractical, how does one untangle the categories? Is this a problem digital humanists have encountered yet?
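
The re-keying problem can be sketched in miniature. In this toy example of my own (the place names and ID scheme are invented, and real database merges are far messier), each project used its own local IDs, so "loc1" means New York in one set and Auckland in the other. Remapping everything onto a shared key (here, simply the place name) lets the two sets merge without collisions, while shared values like Boston collapse into one entry.

```python
# Two location tables with clashing local IDs.
us_locations = {"loc1": "New York", "loc2": "Boston"}
nz_locations = {"loc1": "Auckland", "loc2": "Boston"}

merged = {}   # place name -> new shared ID
new_key = {}  # (source, old ID) -> new shared ID

for source, table in [("us", us_locations), ("nz", nz_locations)]:
    for old_id, name in table.items():
        if name not in merged:
            # First time we see this place: mint a fresh shared ID.
            merged[name] = f"loc{len(merged) + 1}"
        new_key[(source, old_id)] = merged[name]
```

Any record in either source database can then be rewritten by looking up its (source, old ID) pair in `new_key`; the hard part in practice, of course, is deciding when two values really are "the same place."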

var myCodingExperience = “Ok”;

I went into Hacking the Humanities apprehensive about coding. I do not think I would ever be able to achieve proficiency in a coding language, nor do I think that coding should be a prerequisite to working in the digital humanities, nor a requirement for all students of the humanities. Is it helpful? Of course. It’s helpful because understanding a language or a field even at the most rudimentary level can facilitate conversation and collaboration without getting lost in translation. But can that collaboration happen only once everyone knows code? I don’t think so.

My brain has the tendency to switch numbers and letters, and I am a terrible speller. Although this makes looking for books by call numbers a miserable experience, it is harder for me to notice a misspelled variable or a rogue 7 in a page of code than it is to realize I’m in the wrong section of the library. I also find that I do not think as logically, linearly, or literally as a computer does. I am much more visual in how I understand things. I was very glad that Codecademy’s practice pages had different colors for different parts of code, because it gave me a way to see what I was doing that I could understand more easily.

For that reason, HTML and CSS were easier for me than JavaScript, because almost everything you input into the computer is visible in the finished product. Inputting color: magenta under the wrong heading is a mistake that gives the same effect as standing in the Middle Eastern instead of the Chinese history section: I can see where I went wrong. In JavaScript, on the other hand, absolutely everything is under the hood and the inputs are much more math- and logic-based. If my syntax didn’t go through, my first question was always “What did I do wrong?!” It was hard to dial back my frustration and go through line by line, only to end up adding a single missing parenthesis.

As someone who is neither inclined to code nor feels capable of coding beyond, perhaps, changing colors and font sizes, I feel a little betrayed because Donahue’s opening to his response article is:

“Let me start by saying that, despite my title, I am 100% in favor of everyone learning to program and I agree more or less with everything Matthew Kirschenbaum says in his essay ‘Hello Worlds (why humanities students should learn to program).’ I chose this title not to argue against anything Kirschenbaum says, but rather to suggest that the manner in which he says it may be misleading if it is taken at its face value.”

Donahue’s article cannot be taken as an argument against humanities students learning to code; rather, it is an amendment to the statement that all humanities students should code (before they can engage with the computer sciences) and an attempt to show some of the similarities between humanities and computer science projects.

I do not doubt in the slightest that code can be as beautiful in its mastery and efficiency to those who read and use it as I think the six-word opening sentence of Fahrenheit 451 (“It was a pleasure to burn”) is when I read it. In that sense, I do not disagree with, nor do I want to quash, Kirschenbaum’s belief that those who study the humanities have more in common with the programmers across campus than we think. That said, I’m hesitant about making coding a requirement for all humanities students, or for all those who want to collaborate on digital humanities projects. Digital humanities is a deeply collaborative field, and I worry that a coding-proficiency requirement could exclude students and scholars who stand firmly in the realm of the humanities.

For those who want to actively engage and work with the computer sciences on projects, I think it is worth knowing some coding, or at the very least understanding the limitations of programs (which is why I wanted to take this course despite being scared of code). Perhaps it would prevent misunderstandings during longer projects, because knowing what is possible is the first step of making anything.

The Knitting Reference Library (KRL)

I decided to poke around the Knitting Reference Library, created on December 23, 2015 by the Library Digitisation Unit of the University of Southampton. It’s a digitized and cataloged collection of books, catalogs, journals, and magazines about knitting, as well as knitting patterns. The books, patterns, and journals are available on site at the Southampton Library, and the patterns accessible online are a selection of those they own that received copyright clearance. They currently have works representing a publication range from 1840 to 2012, and aim to continue adding new materials that “reflect the revival of interest in the many aspects of this art, craft and fashion.”

You browse the sources themselves through the “Collection” tab next to “About”. At the moment, there are two overarching collections (“Knitting patterns” and “Victorian knitting Manuals”) made up of the 307 items listed as “Texts”. There is a search bar and various filters you can apply to the larger collection, similar to a library catalog’s search UI. You can display your results either as thumbnails or as a single list.

I decided to filter by “patterns” and search for a “hat”. My search returned a Men’s football 4 ply cardigan pattern from the 1950s by Mary Maxim, not quite my goal. The search function latched on to the fact that the description has a section for “Props used in illustration”, which included a hat. I tried again with “waistcoat”, because I had seen it in many titles while browsing, and found 12 results that were more reasonable. The same was true for “cardigan”.
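
The "hat" misfire is a nice illustration of field-scoped versus full-record search. Here is a toy sketch of my own; the records are invented stand-ins (the real collection's metadata fields may differ), but it shows how a term hiding in a props field matches when every field is searched and disappears when the search is restricted to the title.

```python
patterns = [
    {"title": "Men's football 4 ply cardigan",
     "props_used_in_illustration": "football, hat"},
    {"title": "Fair Isle waistcoat",
     "props_used_in_illustration": ""},
]

def search(records, term, fields=None):
    # fields=None searches every field; otherwise only the named ones.
    term = term.lower()
    return [r for r in records
            if any(term in value.lower()
                   for key, value in r.items()
                   if fields is None or key in fields)]

print(len(search(patterns, "hat")))                    # matches via the props field
print(len(search(patterns, "hat", fields=["title"])))  # no title contains "hat"
```

A garment-type field, or even just title-scoped search, would make the collection much friendlier to knitters hunting for a specific pattern.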

Because there’s no filter for type of garment, I think knitters could use these patterns but perhaps wouldn’t come specifically to this collection to find the cardigan pattern of their dreams. The site encourages feedback on each entry through a reviews section and the ability to “favorite” entries. How many reviews or “favorites” an entry gets is displayed along with the thumbnail on the main collection page. Entries can also be freely downloaded in different formats (PDF, Kindle, torrent, etc.), which further encourages practical use. I think the presentation is such that it would best be used by first browsing the whole collection and then picking those books and patterns you wish to download and use offline. Academics, on the other hand, can easily filter by date published or by creator, depending on their research goals.

Building My House

One premise to get out of the way immediately: my house is much more complicated than a dog house. It’s an old blue Victorian-style house, so my room growing up was in a turret. The Victorian style also means my house has a lot of molding, shutters, windows that jut out from the roof, overhangs, etc.

For the most part, I ignored the small details, except for the overhangs in the front of the house, so I could focus on the overall shape of my house. The proportioning was the hardest part initially, because my house underwent a couple of renovations, so the kitchen and the family room jut out the back.

The most exploration I did in SketchUp was with the different shape tools. The major shapes in my house are rectangles and triangles, as you’d expect, but my kitchen is an octagon cut in half at the midpoint of a side, and the tops of the turrets are cones. Another complication of the turrets: the one on the left of my house looks more like a cylinder shoved into the corner of a rectangle, and the one on the right like a half-circle glued to the side of a rectangle.

To make the octagon, I used the pentagon tool to embed a pentagon into the back of my house and then drew a line across the point to make an extra face. I had to zoom into my model so that my line tool wouldn’t automatically snap to the midpoint, because drawing midpoint to midpoint left a face too small for the kitchen windows. The cones I had to Google to figure out how to make. In summary:

1.) Draw a circle

2.) Use Push/Pull to pull the circle into a cylinder

3.) Use the Move tool to move the outer edge of the cylinder into the midpoint. You may have to fiddle with which point along the edge will fold the cylinder neatly into a cone.

I ran into some problems because SketchUp doesn’t register two planes of the same size and shape on top of each other. This meant I had to make the cone slightly larger than my cylinder, delete the original cylinder so I could pull the roof up, then redraw and pull the turret down from under the roof. Looking back, I think I could have drawn the cone to the side and then moved it onto the turret, but since I was going by proportions and not exact measurements, I tried to build directly onto the model when I could.

You can see from the picture that my porch is still somewhat solid. I couldn’t figure out how to draw on curved surfaces to then push/pull or cut out planes on my model. This also prevented me from drawing the white molding that runs along the bottom quarter of my house. The line tool only works with flat surfaces, and using the half-circle tool doesn’t place the line flush against the curved surface. Also, the push/pull tool sometimes would only let me pull in one direction, not both. I’m not entirely sure why that is the case, but I was usually able to work around it. Through those work-arounds, I was able to draw some of the overhang off the edge of the roof. I had to zoom under my roof to draw a new plane and pull it out from the building, but it was well worth the effort.