DTW Final Ancient Port and Shipwreck Map

 

 

This project was created by Will Richards, Tom Choi, and David Coleman. We are all seniors at Carleton, and we’ve all enjoyed this class. There are two primary elements to the project: a map of the discovered shipwrecks from AD 1 to 1500, and a series of historical summaries of nine ancient port cities. Also included are a brief analysis of the dataset and the trends within it, along with a description of the processes we used in the creation of this project. Enjoy!

David, Tom, and Will

Post 8 – DTW Final Project Update

Excerpt from our timeline of deliverables:

  • By the end of Week 6: Have our data cleaned and uploaded to a MySQL database
  • By the end of Week 7: Have the data connected to the map, with the interface existing, if not polished

 

Progress: What have you done so far, who have you talked to, what have you gathered, and what have you built?

Our first order of business was to clean our dataset. We determined which variables and information we wanted to display and store, and which we would throw out. We then designed a relational database to store our dataset with minimal redundancy. From there we connected to our MySQL database and created an XML file containing our data in an appropriate structure. Next, we embedded a Google Maps window into the main page of our website. Lastly, we created ‘markers’ for our map: data structures that currently contain the geolocation and comments from each ship. These markers are displayed on our map.
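For anyone curious what that pipeline looks like in practice, here is a rough sketch of the MySQL-to-XML step. Our actual script for this is PHP; the Python below just shows the same idea, and the table and column names (wrecks, latitude, longitude, comments) are placeholders rather than our exact schema.

```python
# Sketch: dump shipwreck rows from MySQL into an XML file of map markers.
# Table/column names are placeholders; our real script does this step in PHP.
import mysql.connector
import xml.etree.ElementTree as ET

conn = mysql.connector.connect(
    host="localhost", user="dtw", password="secret", database="shipwrecks"
)
cur = conn.cursor()
cur.execute("SELECT latitude, longitude, comments FROM wrecks")

root = ET.Element("markers")
for lat, lng, comments in cur.fetchall():
    ET.SubElement(root, "marker", {
        "lat": str(lat),
        "lng": str(lng),
        "comments": comments or "",
    })

ET.ElementTree(root).write("wrecks.xml", encoding="utf-8", xml_declaration=True)
cur.close()
conn.close()
```

Roughly speaking, the Google Maps side then reads the resulting XML and drops one marker per <marker> element, which is where the geolocation and comments come from.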

 

Problems (and proposed solutions): What issues have you run into?

      • Issues loading unsupported characters into our XML file (fixed by using utf8_encode())
      • Syntax issues while generating our Google Maps markers from our XML, but we forced our way through it

Have they forced you to change your initial plan?

Our initial plan is still on track.  We have plenty of time to explore exactly how we want to present our data and site, so none of our plans have changed.

Do you have a proposed solution or do you need help formulating one?

n/a

 

Tools and techniques: What applications/languages/frameworks have you selected and how are you going to implement them?

      • MySQL relational database to store our data (implemented)
      • Generate an XML file from the MySQL database with a PHP script (implemented)
      • Even with our UTF-8 encoding, some characters don’t appear on Google Maps the way we would like them to. Perhaps we could do more thorough data cleaning and replace troublesome characters in a Python script before those characters ever get loaded into an XML file (a rough sketch of that idea follows this list). If we have problems, we will come to you.
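Here is roughly what that pre-cleaning step could look like. This is only a sketch under assumptions: the input is a CSV with a comments column, and the substitutions shown are examples rather than the exact characters our data needs fixed.

```python
# Sketch: normalize/replace troublesome characters before the data ever
# reaches the XML file. Column names and substitutions are placeholders.
import csv
import unicodedata

REPLACEMENTS = {
    "\u2018": "'", "\u2019": "'",   # curly single quotes -> straight
    "\u201c": '"', "\u201d": '"',   # curly double quotes -> straight
    "\u2013": "-", "\u2014": "-",   # dashes -> hyphen
}

def clean_text(value):
    value = unicodedata.normalize("NFC", value)
    for bad, good in REPLACEMENTS.items():
        value = value.replace(bad, good)
    # Drop lone surrogates or anything else that won't encode cleanly.
    return value.encode("utf-8", errors="ignore").decode("utf-8")

with open("wrecks_raw.csv", newline="", encoding="utf-8", errors="replace") as src, \
     open("wrecks_clean.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        row["comments"] = clean_text(row["comments"] or "")
        writer.writerow(row)
```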

 

Deliverables: An updated timeline of deliverables

  • By the end of Week 6: Have our data cleaned and uploaded to a MySQL database
  • By the end of Week 7: Have the data connected to the map, with the interface existing, if not polished
  • By the end of Week 8: Have performed analysis on our data and begun to incorporate that analysis into our web app in the form of graphics and statistics
  • By the end of Week 9: Have finished both researching and incorporating the featured shipwrecks
  • By the end of Week 10: Have the entire project complete and live

(no change)

Is your project still on track?

Yep!

Post 7 – 3D modeling

Even though it is not especially new software, 3D modeling has always felt somewhat futuristic to me. I’ve really enjoyed working with SketchUp and PhotoScan, I think in part because I feel like the work I’m doing is more impressive than it truly is. Regarding the research questions for which 3D modeling and simulation are appropriate, I think that the uses are many and varied.

As mentioned in previous classes, procedural modeling is good for projects of a larger scale, perhaps the modeling of a city. Giving the computer a set of rules or instructions and letting it create the city is certainly more efficient than building all the buildings separately.

Manual modeling has its uses, however. For extremely complicated shapes or buildings that must be done accurately, manual modeling gives the creator the greatest degree of control, a benefit mitigated by the tedious nature of this method.

Scanning can be useful for creating an accurate digital copy of physical objects that may not be easy to model manually. An example of this might be a knife or crown. Those digital copies might then be used by computers comparing thousands of 3D model entries to look for similarities between artifacts.

Photogrammetry (what we did in class the other day) is probably most useful when one is trying to make a photo-realistic 3D model of a person, place, or thing with complex textures. Because the photos are used to build a point cloud, which is then turned into a mesh and wrapped with a texture stitched from those same photos, an accurate 3D model of the subject is typically easy to obtain. This technique is best on a smaller scale, as many photos of small areas are required to create an accurate model, and doing anything larger than a building would take considerable time.

 

I chose to go more into depth on the 3D Visualization of the Upper Nodena Site, found here. This project aimed to reconstruct the Native American settlement that was on the Hampson family farm hundreds of years ago. It also catalogued all of the artifacts found on the land, creating 3D models of each of those and including them in a database found on the website. The interface is intuitive and very thoroughly themed…

For recording the artifacts, I believe the creators of this project used photogrammetry. The models are realistic and accurate, and I think photogrammetry was the appropriate tool for this task.

For creating the village, I believe manual modeling was used. There is not enough pattern to justify using procedural modeling, nor would there ever be a reason to update this model in the future (which is one of the upsides of procedural modeling). The buildings are fairly simple, and could easily have been copied and pasted into their correct locations after creating only one or two prototypes. You can see from the screenshot below that the buildings used in this model lack the variety that one would expect from buildings in the real world.

In my opinion, that lack of variety does not detract from the model. I personally am glad that the creators did not choose to randomly personalize the residences, as any such personalization would be pure guesswork and therefore detract from the realism of the model.

The only critique I have of this digital humanities project is the font choice. I mean, come on. Papyrus is not a serious font choice. Stop.

Post 6 – Mapping Musser

For my assignment, I chose to map Musser Hall. I lived on 3rd Musser my freshman year, and the building holds many fond memories for me. When I went to search the Archives for pictures of Musser, I came up largely empty-handed. None of the photos provided a good vantage point from which to match Musser to the photo, and few were in focus. I don’t mean to be a conspiracy theorist, but it certainly wouldn’t surprise me if Carleton skimped on the Musser photo-documentation. The building isn’t known for its beauty, inside or out. That being the case, I Googled the building and found one great shot from the southeast corner of the building. Because of Musser’s surroundings, the other three corners of the building are either difficult to photograph or obscured. For that reason, I chose to use the same photo for two opposite corners of the building, the SE and the NW. It turned out alright. Because of the good quality of the photo I was working with, both faces of the building came out pretty clearly.
 

 

All in all, I don’t love this method. While I understand how it could greatly improve the realism of a model (especially with purposefully taken photographs), it somehow feels lazy to simply plop a picture over a rudimentary approximation of the building that I’m trying to model.  Just my thoughts.

Post 5 – Group and Project

Our group – Will Richards, Tom Choi, and David Coleman – would like to create an interactive map of the world’s recorded shipwrecks from AD 1-1500. We hope to include information such as the date the ship was wrecked, the date it was discovered, the location of each shipwreck, and the contents of each ship. These parameters might be limited by the robustness of our data, but we will discover soon if that is the case. Our data come from the Digital Atlas of Roman and Medieval Civilizations Scholarly Data Series in the form of the Summary Geodatabase of Shipwrecks AD 1-1500, current as of 2008. There are ~1000 observations in the database, each with most of the variables listed above. Certain variables look more reliably reported than others, but after some data restrictions and cleaning, each observation will be fit for analysis.

That analysis will include details of cargo change over time, geographic regions with greater than average shipwrecks, and periods of time in history with greater numbers of recorded shipwrecks. It is important to note that a crucial limitation of using recorded historical data like this is that we have neither a random sample nor the whole population of shipwrecks. We are working with the shipwrecks whose records and locations were recorded well enough that they were registered in this database, and there are very likely clear biases in the data as a result. There will almost certainly be a bias toward ships coming from civilizations with better record-keeping (such as the Roman Empire, for example). This bias does not invalidate the inferences we will make with our data; it merely restricts their scope. Any claims we make will then be about ships similar to those in our dataset – including whatever trends we may end up finding there.

Before we do anything with our data, we first need to clean it so that we can better categorize it by its cargo. The cargo column cells have many repeated single words (such as amphoras, silver, swords, ceramic, etc.). Other fields have long text values describing the cargo in prose. We will decide on some number (to be determined) of discrete categories to group our cargo into, and classify each text value as one of those enumerated categories. In this way, we will be able to store our data as a relational database.
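To make that classification step concrete, here is a rough sketch of how we might map free-text cargo descriptions onto a small set of categories. The category names and keywords below are illustrative placeholders; the real list will come out of our data cleaning.

```python
# Sketch: bucket free-text cargo descriptions into enumerated categories.
# Category names and keywords are placeholders, not our final scheme.
CARGO_CATEGORIES = {
    "amphoras": ["amphora", "amphorae", "amphoras"],
    "metals":   ["silver", "gold", "copper", "lead", "ingot"],
    "ceramics": ["ceramic", "pottery", "tile", "terracotta"],
    "weapons":  ["sword", "spear", "armor"],
}

def categorize_cargo(description):
    """Return the set of category labels whose keywords appear in the text."""
    text = (description or "").lower()
    found = {cat for cat, keywords in CARGO_CATEGORIES.items()
             if any(kw in text for kw in keywords)}
    return found or {"other"}

print(categorize_cargo("Cargo of amphorae with traces of silver coinage"))
# -> {'amphoras', 'metals'}  (set ordering may vary)
```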

Once our data is cleaned and categorized, we can decide which aspects of the data we want to be able to filter or sort by. We can then begin integration between our web map and the MySQL database. Once our web map is complete, we can begin building other aspects of our website, such as a ‘featured wrecks’ page, an ‘about us’ page, etc.
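As a sketch of what that filtering might look like on the database side, here is a parameterized query that pulls wrecks within a date range and cargo category. The table and column names (wrecks, wreck_year, cargo_category) are assumptions for illustration, not our final schema.

```python
# Sketch: filter wrecks by date range and cargo category with a
# parameterized query. Table/column names are illustrative only.
import mysql.connector

def wrecks_between(cur, start_year, end_year, cargo_category):
    cur.execute(
        """
        SELECT latitude, longitude, wreck_year, cargo_category
        FROM wrecks
        WHERE wreck_year BETWEEN %s AND %s
          AND cargo_category = %s
        """,
        (start_year, end_year, cargo_category),
    )
    return cur.fetchall()

conn = mysql.connector.connect(
    host="localhost", user="dtw", password="secret", database="shipwrecks"
)
rows = wrecks_between(conn.cursor(), 1, 500, "amphoras")
print(len(rows), "wrecks found")
conn.close()
```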

 

As far as a timeline of deliverables, our plan is:

  • By the end of Week 6: Have our data cleaned and uploaded to a MySQL database
  • By the end of Week 7: Have the data connected to the map, with the interface existing, if not polished
  • By the end of Week 8: Have performed analysis on our data and begun to incorporate that analysis into our web app in the form of graphics and statistics
  • By the end of Week 9: Have finished both researching and incorporating the featured shipwrecks
  • By the end of Week 10: Have the entire project complete and live

 

This is the link to the project.

This interactive ‘ikiMap’ gives a good idea of the project we are planning to attempt. It is an interactive map of the sunken ships of the Great Lakes of North America.

Henceforth, our group tag will be “DTW”.

Post 4 – D4748453

phpMyAdmin exploration: 
I was able to locate phpMyAdmin with little difficulty, but once there I was at a bit of a loss as to what to do. I clicked through the tabs – Browse, Structure, SQL, Search, etc. – but nothing jumped out at me as something to do. I have no real idea what SQL is, and everything looked a bit like techno-gibberish. I clicked around the tree in the left sidebar, but only managed to identify the tables from your WordPress database model. When I went to enter data, I only got errors saying:

“#1064 – You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near ‘2,3,4,5) NOT NULL, `name` INT(2,2,3,3,3) NOT NULL) ENGINE = MyISAM’ at line 1”
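For what it’s worth, that particular error comes from giving INT a whole list of lengths; MySQL’s INT takes at most a single display width. A hedged sketch of what a valid version of that kind of statement might look like, run from Python with placeholder table and column names:

```python
# Sketch: a syntactically valid version of the CREATE TABLE that failed above.
# Table/column names are placeholders; INT takes at most one display width,
# not a list like INT(2,2,3,3,3).
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="student", password="secret", database="sandbox"
)
cur = conn.cursor()
cur.execute(
    """
    CREATE TABLE IF NOT EXISTS people (
        id   INT NOT NULL AUTO_INCREMENT,
        name VARCHAR(50) NOT NULL,
        PRIMARY KEY (id)
    ) ENGINE = MyISAM
    """
)
conn.commit()
cur.close()
conn.close()
```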

I was also unable to import CSV files that I’ve analyzed in the past because of some reason I’m not quite sure of. Another incomprehensible error.

 

I met with more success when trying to install a new plugin and theme. For the plugin, I chose one which will enable sharing of my posts on social media sites like Facebook and Twitter. It was very easy, and the specific plugin that I used was MashShare.

I ended up choosing a theme called Gray Chalk, made by Busy Momi Bee because I liked it. It also was formatted in such a way that I didn’t have to re-add a header link to my About Me page, which would have been the case for some of the other themes.

Database Analysis:
       Prior to this assignment, I had never really given much thought to how data was stored. I’ve taken a good number of courses that require me to work with datasets – Stats 215, Econometrics, Applied Regression Analysis – but what I’ve always uploaded into my statistical analysis software has always been a CSV file (comma separated values), which is a flat database. Now that the question is posed to me, I can certainly see some of the pros and cons of using both styles of database.

       Pros of flat:

  • Simple. For smaller datasets, creating a flat database will take less effort than creating a relational one, with minimal drawbacks.
  • Accessible. Conceptually, a flat database makes sense to anyone, and can easily be drawn on paper.

       Cons of flat:

  • Inefficient for large datasets. With too many observations or parameters, a flat file becomes very difficult to view, separate, and manipulate.
  • So old-school. People have been using flat databases for thousands of years. Get with the program, people.

       Pros of relational:

  • Flexible. Because a relational database is inherently separated into many different pieces, data manipulation is very easy.
  • Efficient. Because of its nature, there is less storage of redundant information overall, resulting in a smaller file size than a corresponding flat database.

       Cons of relational:

  • Inefficient for smaller datasets. While the flexibility of a relational database is useful, for a dataset with few observations or parameters a flat database would be easier to create and just as useful.
  • Complicated. The concept of a relational database takes some explaining, automatically making it harder to use than a flat database.

 

These pros and cons are somewhat simplified, but I don’t think blog posts are the place to delve deep into the technical differences between the two data storage styles.
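That said, a tiny toy example makes the redundancy point concrete. The sketch below uses SQLite (which ships with Python) rather than MySQL, and the tables are made up purely for illustration: the flat version repeats the port name on every row, while the relational version stores each port once and references it by id.

```python
# Toy illustration: flat vs. relational storage of the same records.
# SQLite stands in for a "real" database; all names are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Flat: the port name and country are repeated on every row.
cur.execute("CREATE TABLE wrecks_flat (ship TEXT, port_name TEXT, port_country TEXT)")
cur.executemany(
    "INSERT INTO wrecks_flat VALUES (?, ?, ?)",
    [("Ship A", "Ostia", "Italy"),
     ("Ship B", "Ostia", "Italy"),
     ("Ship C", "Carthage", "Tunisia")],
)

# Relational: each port is stored once and referenced by id.
cur.execute("CREATE TABLE ports (id INTEGER PRIMARY KEY, name TEXT, country TEXT)")
cur.execute("CREATE TABLE wrecks (ship TEXT, port_id INTEGER REFERENCES ports(id))")
cur.executemany("INSERT INTO ports VALUES (?, ?, ?)",
                [(1, "Ostia", "Italy"), (2, "Carthage", "Tunisia")])
cur.executemany("INSERT INTO wrecks VALUES (?, ?)",
                [("Ship A", 1), ("Ship B", 1), ("Ship C", 2)])

# A join reassembles the flat view on demand.
cur.execute("SELECT w.ship, p.name, p.country FROM wrecks w JOIN ports p ON w.port_id = p.id")
print(cur.fetchall())
conn.close()
```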

Post 3 – Code Woah

To me, the question ‘Should humanities students learn to code?’ is phrased as a yes-or-no question. With that in mind, my answer is ‘Yes, humanities students should learn to code.’* My personal feelings on the issue don’t come so much from a place of ‘coding is super useful’ (although it is), but rather from a place of ‘everything is worth learning’. I don’t mean to argue that all humanities students must necessarily learn to code. I personally think it is a skill with many uses, but there are many useful things which a humanities student might wish to learn. That being said, to answer ‘No, humanities students should not learn to code’, I would need some sort of strong justification as to why learning to code would be an objective negative for humanities students, justification which I believe would be hard to come up with. That being the case, I fully support humanities students learning to code.

 

Some of the assigned readings touched on this topic. On his website, Matthew Kirschenbaum argues for humanities students learning to code. He seems to view it as a skill that is too useful to ignore; a reasonable opinion as far as I am concerned. In his essay “Hello Worlds (why humanities students should learn to program)”, Kirschenbaum writes:

“Computers should not be black boxes but rather understood as engines for creating powerful and persuasive models of the world around us. The world around us (and inside us) is something we in the humanities have been interested in for a very long time. I believe that, increasingly, an appreciation of how complex ideas can be imagined and expressed as a set of formal procedures — rules, models, algorithms — in the virtual space of a computer will be an essential element of a humanities education. Our students will need to become more at ease reading (and writing) back and forth across the boundaries between natural and artificial languages. Such an education is essential if we are to cultivate critically informed citizens — not just because computers offer new worlds to explore, but because they offer endless vistas in which to see our own world reflected.”

For me, this resonates as the strongest argument Kirschenbaum has access to. I find it very persuasive to represent an education in computers and CS as not something extra, but instead as a skill that will be almost assumed in the future. In that framework, humanities students should certainly learn to code lest they fall behind their peers and lack crucial skills.

In his response essay, Evan Donahue seeks not to refute Kirschenbaum’s point but instead to give it further nuance. To the best of my understanding, Donahue primarily wants to distinguish between the term “programming” and computer science as a discipline/field of study. He argues that “[students] should not let their inability to program prevent them from engaging with the computer sciences”, going on to clarify that he fully supports humanities students engaging with computer sciences. He ends his essay with a line that I felt summed his feelings up appropriately.

“Learn to program whenever it is convenient, but start thinking about the computer sciences as relevant areas of concern right now.”

I have scattered computer science experience. My freshman year of high school, I took a course called “Programming in Java”, but I have forgotten everything except the importance of semicolons. The spring term of my sophomore year here at Carleton, I took Intro to CS, a course which familiarizes students with Python. Most recently, I have become proficient in R, the language used in Math 245: Applied Regression Analysis. I have also taken some courses on Codecademy, the details of which can be seen on my profile.

These exposures to the computer sciences have left me with a somewhat naive worldview on the topic. I know enough to do certain things, given the correct filetypes and software, but not nearly enough to navigate the digital world in the way that I know some people operate. For me, this lends support to my interpretation of Kirschenbaum’s argument that those who do not learn how to code are disadvantaging themselves. Moving forward into the increasingly digital future, I will make an effort to keep ahead of the technological and digital advancements that would otherwise leave me behind.

Post 2 – Reverse Constructing Bacon

The DH project that I am aiming to reverse engineer is called ‘Six Degrees of Francis Bacon’, and it is a “digital reconstruction of the early modern social network that scholars and students from all over the world will be able to collaboratively expand, revise, curate, and critique.”

While Six Degrees of Francis Bacon is a bit Anglo-centric, it is not difficult to grasp the broader implications of work like this. By mapping out the social networks that members of a past society shared, a more perfect picture of that society may be painted. The same project could be done with members of any sufficiently well-documented civilization.

As far as breaking down the black box of Six Degrees into the basic elements of sources, processes, and presentation, this DH project does not present too difficult a task. The asset the website brings to the table for conveyance is a dataset of persons from history. Each entry contains several parameters, including personal information about the person as well as relationship data between that person and others on the site. This dataset, to the best of my understanding, is continually being updated and crowdsourced – the website accepts (and verifies) submissions of historical persons.

I’m a little bit unclear about the distinction between services/processes and presentation/interface, but Six Degrees offers both a search function and a fairly advanced filter system. It is here that I worry about crossing into the realm of presentation/interface, because these filtration tools are conveniently paired with the graphic, which changes to show only the entries you’ve searched for. Selecting a “dot” of a person on the social network map brings up the corresponding data in a viewing window off to the side of the display, and further tools are available off of that window.

The display, while fully functional, doesn’t jump out at me as something done especially well on this project. There are no problems that I can identify, but there are perhaps too few data points to justify the method of display that was used. The hundred or so displayed dots simply look too few. Some of the filtering options are also awkward in that to search for a group, you must know the complete and accurate tag of that group, not just some relevant part.

Post 1 – Building a Home

This is my blog post about how awesome and easy building a house in SketchUp was.

David Coleman ’17

davids home pic 1

The very first thing I did was to go to Google Earth and look at the birds-eye view of my house. I wanted to start with the correct outline and work backwards from there, because my digital sense of scale could use improvement. From there, I built two floors and the more detailed perimeter, then did the front of the house and doors. I basically threw windows on whenever I felt like it – there were a lot of them. I did my best to keep them in line with the ones on adjacent walls if appropriate. (Google.com)

davids home pic 2

I spent quite a while trying to make the second story more accurate by chopping triangles off the sides of the second story walls to angle the walls, and I am happy with the result. The windows are a rough approximation of what truly exists, with some areas exhibiting more accuracy than others.

davids home pic 3

One tip that I feel helpful enough to merit sharing is this: Use a real mouse, not a trackpad. I’m at least twice as fast when working with a real mouse, and it’s much more fun to make fast progress.

I had a few difficulties. One of them was a tendency to inadvertently delete or drag around the opposite walls of my house while making changes to the near walls. I did that several times, and it set me back quite a bit. I also just had general difficulty with certain tasks, like getting my camera oriented at the part of the building that I wanted to look at, or trying to line things up enough to satisfy my obsessive tendencies.

davids home pic 4

I think I especially enjoyed this assignment because it made me remember this home, which is where I lived from birth to age 19. My parents moved after my freshman year at Carleton, but I still have strong emotional connections to the place. Being able to replicate it so easily in software was eye-opening and enjoyable.
