Team Alumni Map


Just to recap, our project entailed taking Zoobook data and converting it to text. The text was then used to map the data points in ArcGIS. This plan was easier said than done, as we encountered a host of problems and roadblocks along the way. Read all about it here.

To-do List: Team Alumni

For the next week, we are planning to work hard on cleaning our data so that it is in a usable format for the mapping program. As the association we had planned to draw between alumni location and major ended up not panning out, we are working on acquiring a replacement data set. For the new data, we are contacting the Archives and the Athletics department to ask for old sports team rosters.

By next weekend we should be able to input our data into the mapping software and examine the results.

Project Mudd Update

Hello from Project Mudd! Things have been happening, mostly behind the scenes, but we are excited to show a small snippet of what we’ve found so far. Our project will be more of a digital exhibition focused on these questions: What is Mudd? What’s in Mudd? What does Mudd mean?

We emailed Facilities to try to get more information about the architecture of Mudd. Unfortunately, we do not have access to CAD files of the building, but fortunately, we now have access to some higher-resolution PDFs.

[Image: floor plan of Mudd Hall]

We would also like to explore some of the collections within Mudd and digitize some of them for online viewing. By combining testimonials, architecture, and material culture, it’s our hope that we can create an accurate snapshot of what life on Carleton’s campus was like while Mudd still stood.

We’ve also begun gathering opinions and testimonials through surveys and emails to various students, departments, and faculty.

Lastly, thinking about our final presentation and what form it will take, we’ve considered making the model explorable online using Unity’s Web plugins: a space where users can interact with 3D models of some of the items collected in Mudd, plus another section that holds testimonials about Mudd, its culture, and its impact on the lives of the people who have worked and played there.

Update On Group Project

So far, we’ve reached out to both the Carleton Archives and ResLife concerning what data they might be able to offer us. We’re still waiting for an answer from ResLife, but the Archives responded and unfortunately don’t have any information about room draw numbers for past years. They have confirmed that they have kept directory information in print for a certain number of years, which we would then have to transcribe into datasets.

As of now, this process is expected to be quite time-consuming, which will most likely force us to reduce our sample size from an expected ~100-year period at yearly intervals down to perhaps four or more selected years. Analysis of room draw priorities might have to be dropped from the project because of our lack of data.

For now, everything else in the proposal (found here) seems feasible and is on track for completion. We’ve begun our use of ArcGIS and will most likely start with SketchUp this coming week for the building models.

In the meantime, we have mapped the distribution of students for this year as an example, limited to most of the residence halls (excluding townhouses and Northfield Option houses). Map.

Post 8 – DTW Final Project Update

Excerpt from our timeline of deliverables:

  • By the end of Week 6: Have our data cleaned and uploaded to a MySQL database
  • By the end of Week 7: Have the data connected to the map, with the interface existing, if not polished


Progress: What have you done so far, who have you talked to, what have you gathered, and what have you built?

Our first order of business was to clean our dataset. We determined which variables and information we wanted to store and display, and which we would throw out. We then designed a relational database to hold our dataset with minimal redundancy. From there, we connected to our MySQL database and created an XML file containing our data in an appropriate structure. Next, we embedded a Google Maps window into the main page of our website. Lastly, we created ‘markers’ for our map: data structures which currently contain the geolocation and comments for each ship. These markers are displayed on our map.
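For a sense of what that export step looks like, here is a rough sketch in Python (our working export is a PHP script, and the connection details, table name, and column names below are placeholders, not our real schema):

```python
# Illustrative sketch only: dump shipwreck records from MySQL into the
# marker XML that the map page reads. Requires the PyMySQL driver.
import xml.etree.ElementTree as ET

import pymysql

conn = pymysql.connect(host="localhost", user="user", password="pass",
                       database="shipwrecks")  # placeholder credentials
markers = ET.Element("markers")

with conn.cursor() as cur:
    cur.execute("SELECT name, lat, lng, comments FROM wrecks")  # assumed schema
    for name, lat, lng, comments in cur.fetchall():
        # One <marker> per ship; the map script reads these attributes to
        # place a pin and fill its info window.
        ET.SubElement(markers, "marker", {
            "name": name,
            "lat": str(lat),
            "lng": str(lng),
            "comments": comments or "",
        })

ET.ElementTree(markers).write("markers.xml", encoding="utf-8",
                              xml_declaration=True)
```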


Problems (and proposed solutions): What issues have you run into?

      • Issues loading unsupported characters into our XML file (fixed by using utf8_encode())
      • Syntax issues while generating our Google Maps markers from our XML, but we forced our way through them

Have they forced you to change your initial plan?

Our initial plan is still on track.  We have plenty of time to explore exactly how we want to present our data and site, so none of our plans have changed.

Do you have a proposed solution or do you need help formulating one?

n/a


Tools and techniques: What applications/languages/frameworks have you selected and how are you going to implement them?

      • MySQL relational database to store our data (implemented)
      • Generate XML file from MySQL database with a PHP script (implemented)
      • Even with our UTF-8 encoding, some characters don’t appear on Google Maps the way we would like. We may do more thorough data cleaning in a Python script to replace troublesome characters before they ever reach the XML file (a rough sketch follows this list). If we run into problems, we will come to you.
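Here is a minimal sketch of what that pre-cleaning step could look like, assuming the troublemakers are mostly curly quotes, dashes, and accented letters (the replacement table is a guess, not a final list):

```python
# Hypothetical pre-cleaning pass: normalize or strip characters that Google
# Maps renders badly before they are written into the XML file.
import unicodedata

REPLACEMENTS = {
    "\u2018": "'", "\u2019": "'",   # curly single quotes
    "\u201c": '"', "\u201d": '"',   # curly double quotes
    "\u2013": "-", "\u2014": "-",   # en and em dashes
}

def clean_text(value: str) -> str:
    """Swap known troublemakers, then drop anything still outside ASCII."""
    for bad, good in REPLACEMENTS.items():
        value = value.replace(bad, good)
    # Decompose accented letters ("é" -> "e" + combining accent), then
    # discard whatever cannot be represented in plain ASCII.
    value = unicodedata.normalize("NFKD", value)
    return value.encode("ascii", "ignore").decode("ascii")

print(clean_text("P\u00eacheur \u2014 \u201cwrecked\u201d in 1898"))
# -> Pecheur - "wrecked" in 1898
```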


Deliverables: An updated timeline of deliverables

  • By the end of Week 6: Have our data cleaned and uploaded to a MySQL database
  • By the end of Week 7: Have the data connected to the map, with the interface existing, if not polished
  • By the end of Week 8: Have performed analysis on our data and begun to incorporate that analysis into our web app in the form of graphics and statistics
  • By the end of Week 9: Have finished both researching and incorporating the featured shipwrecks
  • By the end of Week 10: Have the entire project complete and live

(no change)

Is your project still on track?

Yep!

Final Project Week #7 Updates

Group Members: Shatian Wang and Melanie Xu

We have processed our data, coded it into personal profiles of individual interviewees, and compared and contrasted the profiles for future use. We currently have profiles for 11 interviewees and are in the process of obtaining written consent from a few of them (the others have already agreed to public use of their material). In the event that we cannot obtain that consent, we are prepared to display only the profiles of the 8 interviewees who agreed to public presentation of their stories. In all, we have built our database of people, places, photos, and city histories and backgrounds.

Our original idea of tracing each individual’s migration pattern with an interactive, individualized map met some resistance, as we could not find a map model that suited our needs. Instead, we’ve decided to use the story map template on ArcGIS, which presents the stories in a similar, though less nuanced and less complicated, way.

Now that we have gathered all the content we will be presenting in our final project, we need a web framework that can display it in an interactive way. We have decided to use the Cascade template in ESRI’s StoryMaps. We will first use ArcGIS Online to create a map that depicts each woman’s migration path, and we will then import these maps into the Cascade template alongside the narratives.

We have completed the week 6 and week 7 objectives in our project timeline. Following our timetable, what we need to get done in week 8 is to create a story map for at least one woman using ArcGIS and the Cascade template. After that, we want to build a web page that explains the background of our project and links to the story maps of all the women.

Update #2 Carleton Alumni Map

Progress
For this week’s part of the project, our plan was to gain access to the data that we will use in the following weeks to create our map. The first step of the data-gathering process was to narrow down what type of data we hoped to display. After some discussion, we settled on gathering information on the following variables: Name, Year, Location (State, City, Town), High School, Major, and Gender.
We began our search for available data on the internet. After some searching, we found the Alumni Database, which gives us access to Carleton majors for the last hundred years; however, it displays no location information. As location is a key component of our project, we needed a supplementary source to provide this information. PDFs of old Zoobooks turned out to be the key: the Zoobooks contain the information that is not available in the directory.
Our plan is to combine the two sources into one dataset by matching individuals’ names across both sources; a rough sketch of that join is below.
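Assuming we end up with one spreadsheet per source, the merge could look something like this (the file names, column names, and name-normalization rule are all placeholders, since we have not gathered the data yet):

```python
# Hypothetical sketch of joining the Alumni Database export to the Zoobook
# data on a normalized version of each person's name.
import pandas as pd

majors = pd.read_csv("alumni_majors.csv")   # assumed columns: Name, Year, Major
zoobook = pd.read_csv("zoobook.csv")        # assumed columns: Name, City, State, High School

def name_key(name: str) -> str:
    """Crude normalization so 'Smith, Jane' and 'Jane Smith' can match."""
    parts = name.replace(",", " ").lower().split()
    return " ".join(sorted(parts))

majors["key"] = majors["Name"].map(name_key)
zoobook["key"] = zoobook["Name"].map(name_key)

combined = majors.merge(zoobook, on="key", how="inner", suffixes=("_dir", "_zoo"))
combined.to_csv("alumni_combined.csv", index=False)
```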
We have also sent emails to Alumni Relations, the Carleton Archives, and Admissions, asking for any data pertaining to our subject that may have already been compiled.
Problems
At the moment we are running into two large problems, one for each of the sources we plan to use for our data.
i) Alumni Directory:
While all the information here is readily available in digitized form, it is not gathered in one condensed location, meaning that transcribing all of it by hand would be overly time-consuming.
ii) Zoobook:
The problem with the Zoobook is the direct inverse of the Directory’s: while all our information is in one place, it is locked in PDF files that we cannot use directly.
Tools and Techniques
We are planning to use a PDF-to-text converter to make the Zoobooks usable. Once we have the information in text format, we will write our own Python script to reformat the massive text blocks produced by the converter and import them into an Excel spreadsheet; a rough sketch is below.
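Since we have not seen the converter’s output yet, the record layout below is an assumption (one student per line, with name, hometown, state, and high school separated by commas); the real script will need a regex tuned to the actual text blocks:

```python
# Rough sketch: parse one converted Zoobook text file into rows and write a
# CSV that Excel can open for the cleaning stage. The filename and the
# per-line format are placeholders until we see real converter output.
import csv
import re

ENTRY = re.compile(
    r"^(?P<name>[^,]+),\s*(?P<city>[^,]+),\s*(?P<state>[A-Z]{2})\s+(?P<school>.+)$"
)

rows = []
with open("zoobook_1998.txt", encoding="utf-8") as f:
    for line in f:
        match = ENTRY.match(line.strip())
        if match:
            rows.append([match["name"], match["city"], match["state"], match["school"]])

with open("zoobook_1998.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["Name", "City", "State", "High School"])
    writer.writerows(rows)
```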
As for the Alumni Directory, we are hoping to obtain the raw data from Alumni Relations. If that proves fruitless, we may have to turn to data scraping.
Deliverables
We expect to be a little behind. However, as long as we get all of our data over the course of the week, we should be back on track for data scrubbing and formatting by next weekend.

Preservation of Mudd Final Project

Brittany, Martin, and I will work to preserve and recreate the cultural history of Mudd Hall of Science before it is torn down this summer. We’re hoping to pull from our experience on the 3D Boston Massacre Project and use the Unity Engine to create a virtual narrative of Mudd with interaction points within the model that provide more detailed information. The recreation will potentially consist of:

  1. A 3D recreation of Mudd, either using CAD files borrowed from Facilities or built by hand in SketchUp from photos and floor plans.
  2. Photographs from the digital archives that show how Mudd was used over the years and how it has changed, along with current photos of the unique aspects of the building today.
  3. An AV component in the style of Ken Burns, which will include some of the photographs mentioned above and potentially testimonials from those who use Mudd.
  4. A guided tour through the building scripted in Twine.

We can easily store all the components (images, audio, models, Unity scenes, etc.) in simple shared storage such as Google Drive and then publish our project via the Unity Web Player.

The first step for us is to try to get hold of the CAD files from Facilities and to start looking for photographs in the Digital Archives. If we can’t get the CAD files, we will need to compile the reference material to build Mudd ourselves.
