If you want to see a video tutorial on how to use Lucidchart, CLICK HERE!
Visualizing Data
This week we focused on the visualization of data (to be fair, past weeks have focused on data visualization as well), especially static pictorial summaries of data. In class we discussed the elements and qualities of good data visualization and how to display different types of data using different techniques and methods. We discussed what makes the Minard Map arguably the best visual representation of data ever created. As Edward Tufte, the godfather of data visualization, stated:
“It may well be the best statistical graphic ever drawn.”
The main reason the Minard Map is considered to be the best visual representation of data ever created is that it intuitively shows many elements of a data set in a concentrated format. Most importantly, the map is intuitive and easy to understand: one look at the map and a brief read of its explanation is enough to grasp all the information presented. It also gives a good summarized view of Napoleon’s campaign into Russia and shows the key data of that campaign. The map uses a great and unique format to present data in an intuitive, easy-to-understand way; its linear layout makes it clear that the reader is moving through time. Overall it is a great example of data visualization because it fits a lot of important information into a very intuitive, easy-to-understand format.
Since we touched on the topic of the GREATEST statistical graph ever created, it would only be fair to talk about the other end of the spectrum, the WORST statistical graph ever created. As eloquently stated by Mr. Tufte,
“This may well be the worst graphic ever to find its way into print.”
This graph has so many unnecessary details and so little substantial information. It contains only 5 unique data points, yet makes unnecessary use of 3D shapes and 4 different colors.
At first glance it looks like the graph is trying to show 4 different data sets with one on top of the other, but upon closer examination you see that the colors are supposed to create a 3D effect in the graph. The worst thing about the 3D aspect is that it serves no purpose but to look “pretty” and truthfully, it doesn’t even serve that purpose. The 3D aspect adds nothing to the further understanding of what the graph is trying to show and is there as fluff.
The top portion of the graph is a mirror image of the bottom portion, and I can’t offer an explanation as to why they decided to mirror an already confusing graph. Mirroring in this context serves no purpose: the necessary information is already shown on the bottom half, so the mirror image does nothing but fill space.
Overall this graph is super confusing and not intuitive at all. It took me around 5 minutes just staring at this graph to comprehend what they were doing, but I still don’t understand what was going on through their minds when they decided to create this “masterpiece”.
To Do list for 9th Week Group Project
To Do List
- Finish Compiling all the data into a workable format in excel
- Finish Coding the program to clean up the data for us (Maybe, we could always clean it up by hand)
- Plot all the data points onto ArcGIS.
- Create a timeline function in ArcGIS to show the progression of data points throughout time.
- Create an interactive map that will show the change in location of varsity athletes throughout time.
- Get the historical information for past Varsity Athletes.
- Meet up with group members to discuss our findings, and maybe find some World Events that might’ve changed the data points that we have gathered.
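The data-cleaning step in the list above could be sketched in Python with just the standard library. The column names and sample rows below are hypothetical stand-ins for our actual athlete spreadsheet, not the real data:

```python
import csv
import io

# Hypothetical sample of raw athlete data exported from Excel.
# Column names and values are illustrative assumptions.
raw = """name,sport,hometown,year
  Jane Doe ,Soccer, Northfield MN ,1975
,,,
John Smith,  basketball ,Chicago IL,1982
"""

def clean_rows(text):
    """Strip stray whitespace, drop empty rows, and normalize casing."""
    cleaned = []
    for row in csv.DictReader(io.StringIO(text)):
        # Skip rows where every field is blank (a common Excel export artifact).
        if not any(v.strip() for v in row.values()):
            continue
        cleaned.append({
            "name": row["name"].strip(),
            "sport": row["sport"].strip().title(),
            "hometown": row["hometown"].strip(),
            "year": int(row["year"]),
        })
    return cleaned

rows = clean_rows(raw)
print(rows)
```

Something like this might save us from cleaning everything by hand, though for a small dataset hand-cleaning could still be faster.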
Group Project Week #2 UPDATE!
The post was published on my blog which can be found here!
Models of the Triple D Kind
This week we tackled the big field of 3D modeling. 3D models are becoming more and more prevalent in today’s computerized society. They can address many varieties of research questions, ranging from preserving an archaeological site to city planning. 3D modeling is most useful when applied to projects that require the use of 3D space in order to fully visualize the solution, and it can be applied to most problems that traditional modeling already tackles, often with even greater effectiveness.
The field of modeling is already well fleshed out; 3D modeling is just another addition, albeit a large one, to the field.
An interesting note on 3D modeling is that there are different processes of modeling in 3D space and different ways of generating these models: procedural modeling, photogrammetry, and scanning.

I found procedural modeling the most interesting because it can create complex bodies and models using only a simple foundation of rules that tells the program how to generate the model. Using a simple set of rules, programs can generate complex cities and structures without much help from the programmer. Procedural modeling is most effective for large models; for city planning, for example, it makes the most sense because you don’t have to manually model every single tree and sidewalk in a large city. But procedural modeling falls short when it deals with more intricate and specific types of models, and it also falls short in the hardware department, because it takes a lot of computing power and time to procedurally model anything.

Scanning and photogrammetry make the most sense when you want a very detailed model of an object. For example, preserving or digitizing old relics in a museum is a great use of scanning or photogrammetry, because these processes capture all the small details needed to fully preserve a relic. However, they fall short on anything much larger: larger objects have more vector points to scan, and increasing the number of vector points means the scan takes a lot longer and requires a lot more processing power to chug through all of those data points.
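The “simple foundation of rules” idea can be illustrated with a toy string-rewriting system (an L-system), one classic form of procedural generation. The rules below are my own toy example, not anything from a real modeling package:

```python
# A minimal sketch of rule-based (procedural) generation: an L-system.
# A tiny set of rewrite rules expands a simple seed into a complex
# structure, much like a procedural modeler grows a city from layout rules.
rules = {"A": "AB", "B": "A"}  # toy rules chosen for illustration

def generate(seed, rules, steps):
    """Apply every rewrite rule to every symbol, `steps` times."""
    s = seed
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

print(generate("A", rules, 5))  # the string grows quickly from a one-letter seed
```

A few lines of rules produce structures far more complex than the rules themselves, which is exactly the appeal (and the computational cost) of procedural modeling.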
This week we explored Marie Saldana’s DH project as an example of how 3D modeling can be used in projects. Marie used procedural modeling to generate Roman cities. Since Roman cities followed very structured rules for their layout, her goal was to write out a set of rules to procedurally generate them. It worked very well, giving nice rough estimates of what a Roman city would look like without manually modeling all the intricate details of such a large city. However, a shortfall of procedural modeling is that, since the city is created from a set of rules the program follows, we have to ask how to avoid the “trap of generalization.” As Marie puts it:
“Another challenge is rule-based modeling’s inherent bias for finding isomorphism and ignoring singularities. In other words, how do we deal with the ubiquitous cases where the so-called ‘rules’ turn out to be broken?”.
Procedural modeling is also very limited in its choice of software. For her project Marie used only CityEngine, which shows the scarcity of software that can procedurally generate models. CityEngine also has its shortcomings; as Marie found, “[CityEngine is] currently being targeted and developed for the urban planning, production design, and gaming markets, and is therefore not necessarily optimized for scholarship or teaching.”
Overall I thoroughly enjoyed this week’s material. 3D modeling is a huge field that will be invested in a lot in the future due to its versatility in solving research questions. Not to mention it’s also really fun to mess around with 3D objects on the computer.
Modelling Willis Hall in SketchUp
For homework I made a quick SketchUp model of Willis Hall using images found on Google Maps and in the Carleton Digital Archives. I first used Geolocation to find Willis Hall on Google Maps and grabbed the area surrounding it. Then I created a basic shape of Willis Hall to place where the building should be, and repositioned my axes so that they corresponded to the perspective I was looking at. Next I uploaded the image of Willis Hall from the archives that matched the perspective of the axes, adjusted the vanishing-point lines to match the photo, and projected the texture onto the shape of the building. To get the top of the building to look like Willis, I used Command + the Paint Bucket tool to sample the object underneath Willis and project the texture of the roof onto the shape.
Mapping in DH
Today in class we learned about DH projects in the context of maps. We explored various different map tools to see how they can each be used to create and modify DH projects. One platform that stood out to me was the ArcGIS mapping website. This website allows for easy mapping of data presets that are inputted or directly uploaded using an Excel sheet. You can see my finished product here on my website.
What we did was input the first colleges of the United States, using dates, location, and affiliations.
The great thing about ArcGIS is that it is very easy to show reasonably large amounts of data on a relatively easy to understand scale.
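Since ArcGIS can ingest a spreadsheet directly, the data only needs to be shaped into a table with coordinate columns. A minimal sketch of that kind of CSV; the column names and coordinates are my own illustrative choices, not the class dataset:

```python
import csv
import io

# A sketch of the kind of spreadsheet ArcGIS Online can ingest: a CSV
# with latitude/longitude columns it can place directly on the map.
# Rows and coordinates here are approximate, illustrative assumptions.
colleges = [
    {"name": "Harvard", "founded": 1636, "lat": 42.3770, "lon": -71.1167,
     "affiliation": "Puritan"},
    {"name": "William & Mary", "founded": 1693, "lat": 37.2707,
     "lon": -76.7075, "affiliation": "Anglican"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=colleges[0].keys())
writer.writeheader()
writer.writerows(colleges)
csv_text = buf.getvalue()  # save this text as a .csv and upload to ArcGIS
print(csv_text)
```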
Though I found ArcGIS to be a little hard to use at first (the UI was a bit hard to understand and it wasn’t very intuitive) once I got the hang of it there were so many options that I could choose from. I could color code each college based on its affiliations with other organizations, or I could choose to group the colleges based on their establishment dates. Overall I have an okay understanding of how DH applies in terms of mapping.
After exploring how easy it is to input large amounts of data into ArcGIS, I have an idea for the final group DH project. I proposed to my group mates that we gather data on where Carls have come from in the past and map it with an interactive map where the user can scroll through time to see where Carls have come from around the world and across the United States. I hypothesize that getting and inputting the data will be the hardest step of this project; once we have the data digitized, we can easily map where Carls have come from throughout the years.
Chris, one of my group members, has written down the important details of the meeting we had today about what the project should be. It is linked here.
Different Databases
I tried to reproduce the database that Stephen Ramsay described in his article. Regarding the retrieval and storage of data, I’ve learned that the organization of such a system is
“Complicated by the need for systems which facilitate interaction with multiple end users, provide platform-independent representations of data, and allow for dynamic insertion and deletion of information.”
Part of database design is categorizing all the information in the database as efficiently as possible, removing as much repetition and “useless” data as possible. In a database that stores a lot of data, a good design also allows data to be recalled and stored effectively without repeating any values, which comes down to good categorization.
The Relational Model
The relational model attempts to factor out redundancies in the data by finding the relationships between the datasets. It makes it easy to categorize, manipulate, and search large groups of data, and it keeps information well organized by placing sets of data points under a main category, with sub-categories beneath the main headers. Many-to-one and one-to-many relationships make it easy to find specific points of data in a large dataset using only a specific set of definitions.
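The many-to-one idea can be sketched with Python’s built-in sqlite3 module. The tables and sample rows below are my own illustration, not Ramsay’s schema: each college row references one affiliation row, so the category text is stored only once instead of being repeated.

```python
import sqlite3

# A minimal relational sketch: two tables linked by a foreign key.
# Table names and sample rows are made up for illustration.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE affiliation (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE college (
    id INTEGER PRIMARY KEY, name TEXT, founded INTEGER,
    affiliation_id INTEGER REFERENCES affiliation(id))""")
cur.execute("INSERT INTO affiliation VALUES (1, 'Puritan'), (2, 'Anglican')")
cur.executemany("INSERT INTO college VALUES (?, ?, ?, ?)", [
    (1, "Harvard", 1636, 1),
    (2, "William & Mary", 1693, 2),
    (3, "Yale", 1701, 1),
])
# The many-to-one link lets us search by category without repeating it.
rows = cur.execute("""SELECT college.name FROM college
    JOIN affiliation ON college.affiliation_id = affiliation.id
    WHERE affiliation.name = 'Puritan' ORDER BY founded""").fetchall()
print(rows)
```

The affiliation name lives in exactly one row, so renaming a category means changing one value rather than hunting through every record.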
One of the main cons of this model is that, through the process of categorizing large sets of data, personal opinion and bias are introduced into this “objective” database. By inputting and categorizing the data, you eliminate the neutrality of the pieces of information.
“Any time we break information down and classify it into categories, we are imposing our human world view and experiences on the information, whether consciously or not. “
This process is unavoidable; the best way of dealing with it is to include metadata within the data itself to explain the thought behind each step of the process.
Flat Databases
One of the great things about flat databases is that they are extremely easy to get started with. Setting up an Excel sheet is one of the easiest things ever, and it is very easy to set up a flat database for a small group of data. Getting knowledgeable about flat databases is also simple, as using one is just a matter of inputting the data into the database.
The con of flat databases is that they aren’t powerful enough to handle big datasets. This is a common theme with flat databases: they don’t have the power to compute over large datasets, and they don’t have the “power” to categorize everything.
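The contrast with the relational model shows up in a few lines: in a flat file, every row carries its full category text. A sketch with made-up rows:

```python
import csv
import io

# A flat database is just rows in a single table; note how the
# affiliation text repeats on every row (hypothetical sample data).
flat = """college,founded,affiliation
Harvard,1636,Puritan
Yale,1701,Puritan
William & Mary,1693,Anglican
"""
rows = list(csv.DictReader(io.StringIO(flat)))

# Easy to set up and query for small data...
puritan = [r["college"] for r in rows if r["affiliation"] == "Puritan"]
print(puritan)
# ...but every repeated value is a chance for inconsistency, and any
# search is a full scan of the file, which doesn't scale to big datasets.
```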
Carleton Timeline 1967-1991
A Week Without Coding Makes One Weak
As I take this class I am presented with the question: Should students code? Oh for sure, they should. Code is a very powerful language. As society becomes more computerized and digitized, students should know what language is building the modern society they see around them. I would even argue that not only should students learn how to code; everyone should learn the basics of coding.
The reason why we learn other languages is because we want to communicate with other people more efficiently and maybe in the process even try and understand their culture, and if computers are becoming more and more important in our lives, I don’t see why we shouldn’t apply the same logic as before to this new computer language. As Kirschenbaum stated
‘Knowledge of a foreign language is desirable so that a scholar does not have to rely exclusively on existing translations and so that the accuracy of others’ translations can be scrutinized.’
However, there are other views holding that in the digital humanities, coding and the humanities are two separate entities. Donahue, in agreement with what I proposed above, pushed back against that assumption:

“The very idea that they are, a priori, separate and distinct bodies of knowledge may be the king hobgoblin of any attempts to create something that professes to be a digital humanities situated somewhere between the two.”

Thus, while some perspectives hold that these two fields should be kept separate, treating that separation as a given may itself be the problem.
Coding is an art form in itself. I’ve had brief experiences with coding here and there, either in summer school or through self-taught lessons on Codecademy. I find that coding is just another language, except that this language is a lot easier to understand.

Code as a language is great simply because it is as logical as it can possibly be. Anyone can code. Coding is just a conversation you have with the computer about a job you want it to complete. We always have conversations with each other about what we expect to be done; all code is, is a language that boils those semantics down into a dense, concrete form.

Coding is also a lot easier to learn than other languages. Comparing my experience learning code with my experience learning a foreign language, I can confidently say that learning code is a lot easier. If our education system made coding a requirement, just as a secondary language is a requirement, then more young adults would know how to use this powerful new tool effectively in their lines of work. There is hardly a field, in science or in general, that does not involve any coding at all. Computers have ushered in a new age of information and digitization, and with this new age, every field we know of will be touched by this new technology.
Code may seem like a daunting language to learn at first, with its unfamiliar language and its intimidating structure. But honestly it’s not that hard. Once you get the key commands down, coding will become second nature to you, similar to how once you understand the basic words of a language, learning that language will become easier and easier. And as with every skill, coding takes practice, practice, and practice.
Overall I am glad that this week we were introduced to code. I believe that to be successful in any field nowadays one needs to be knowledgeable in coding. I am excited as to what we will do with this code. I am even further excited by the prospect of building my own website from scratch.
You can see the progress I made in coding here!