A brief piece of visual storytelling detailing the struggle of coming up with the right words.
One of the most exciting aspects of digital media, in my eyes, is the advent of interactivity with the information and entertainment we consume. Chapter Three in The Language of New Media was, truth be told, the first to really grab my attention, thanks to Manovich’s focus on “teleaction”. Manovich describes teleaction as “Acting over distance. In real time.” The manipulation of media such as video games has existed for decades, but the application of tools such as controllers to “real life” is now coming into its own. For example, an operator can remotely pilot a submarine scavenging a sunken wreck while viewing the scene through a camera mounted on the sub. This type of remote manipulation offers an incredible amount of spatial flexibility to many people and professions, keeping workers out of dangerous situations or simply making working from home more comfortable.
Teleaction has changed the face of modern warfare as well. This passage from the chapter is prescient with regard to how war is waged today:
Today, from thousands of miles away – as was demonstrated during the Gulf War – we can send a missile equipped with a television camera close enough to tell the difference between a target and a decoy. We can direct the flight of the missile using the image transmitted back by its camera, we can carefully fly towards the target, and using the same image, we can blow the target away. All that is needed is to position the computer cursor over the right place in the image and press a button.
NPR goes into greater detail, outlining the “benefits” of using remotely controlled machines in a theater of war.
“Engaging in combat and people being at risk have always been together until now,” Singer says. “The technology allows you to disentangle them, and now a new age of war has started.”
Machines such as drones keep soldiers and military personnel out of harm’s way, but there is an ongoing debate about the ethics of using remotely controlled vehicles to attack targets and engage enemy combatants. One thing is for sure, though: as with many products and technologies in our lives, military innovation will ensure that the ideas and mechanics behind teleaction flourish and spread to everyday society. Hopefully we’ll all someday be able to send robots with cameras and video screens to work in our place.
Jacksonville is what you make of it. I’ve visited and frequented many of the city’s various locales over the years and found that I prefer my little corner of Riverside to spend my time. These pictures are what Jacksonville is to me right now and a glimpse of the divide between urban sprawl and nature.
“Wikipedia is the best thing ever. Anyone, in the world, can write anything they want about any subject. So you know you are getting the best possible information.” – Michael Scott, The Office
I often catch myself contemplating how information-spoiled teenagers and young adults are in this era of instant digital gratification. Any question, any desire to hear a particular song or find out what a friend is doing at that particular moment, is a few finger swipes or keyboard clicks away. I am just old enough to remember digging through an encyclopedia or periodical for answers to mundane questions, and if I couldn’t find the answer myself I had to call someone who might know. Remembering these experiences made me realize that the internet is just a much bigger variant of the “old days”: you can get the info quicker, but sometimes you’re still at the mercy of spotty resources and questionable credentials. Younger people may be at a disadvantage when it comes to finding the right sources, however, due to their classification as “digital natives”.
“‘Growing up digital’… means that more and more of the information that drives children’s daily lives is provided, assembled, filtered, and presented by sources that are largely unknown to them, or known to them in nontraditional ways.”
Folk and Apostel’s “Online Credibility and Digital Ethos” raises a few good points about the methods younger people use to find information, such as the quote above. Kids don’t remember a time before the prevalence of the internet, so they might be quick to believe any article or page that is “published”; if it’s online, it must be true, right? Previous generations are a bit savvier at separating signal from noise, if only because we are more familiar with reputable sources of information: The New York Times, Encyclopedia Britannica, and CNN are examples of what we might look for when searching for credible news or answers. These institutions have the benefit of being “pre-web”, which is to say they are established entities whose online presences get the benefit of the doubt thanks to their longevity in the realm of information. When Googling “What is the Tomb of the Unknown Soldier?”, we are more apt to click on the result from The Washington Times or the official site of Arlington National Cemetery itself than on a random WordPress blog. While the WordPress blog may contain valuable personal anecdotes about the tomb, the Times is more likely to give an “official” answer that has been vetted and fact-checked to a certain standard.
Digital natives may have the sum of human knowledge at their fingertips, but it takes experience to separate the wheat from the chaff. It seems there can be such a thing as too much information; we’re bombarded with visual stimulation almost every second of the day, and learning to take what you need and leave the unessential behind comes with time. The kids will get there, even if the internet is determined to distract them.
After spending a third of my life in the field of Information Technology, I’ve come to realize I’m very jealous of the people who can effortlessly occupy the spaces of both computer science and design. I’ve met many programmers, user interface engineers, database admins, and graphic designers; all have been talented in their respective disciplines, but I always envied those who could both design and program efficiently, often at the same time. Unfortunately, I never became much of a serious programmer because math was never a strong subject for me. Needless to say, this assignment about type theory was a tough one to wrap my head around, even with that bit of programming background. I’ll begin with a short explanation of who invented it and what it means for how applications work.
Type theory is a mathematical and logical system put forth by the philosopher and mathematician Bertrand Russell, pictured above. Russell was a highly esteemed intellectual and prolific author, and is considered one of the pioneers of early computer science. He was a quintessential jack-of-all-trades, dabbling in various fields and becoming renowned for his insights into the building blocks of nature and civilization through math and logic. In addition to his pursuits in mathematics and philosophy, Russell was also a historian and conscientious objector, eventually spending time in prison for his public displays of pacifism during the First World War. His contributions to logic include Russell’s paradox, which led to his introduction of type theory:
Russell’s paradox is based on examples like this: Consider a group of barbers who shave only those men who do not shave themselves. Suppose there is a barber in this collection who does not shave himself; then by the definition of the collection, he must shave himself. But no barber in the collection can shave himself. (If so, he would be a man who does shave men who shave themselves.)
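The paradox above can even be sketched in code (my own hedged illustration, not something from the reading): if we encode Russell’s “set of all sets that do not contain themselves” as a function and then ask whether it contains itself, the question can never settle on an answer — in Python, the query simply recurses forever.

```python
import sys

# Encode membership as a function: a "set" s contains x iff s(x) is True.
# Russell's set R contains exactly those sets that do NOT contain themselves.
def russell(s):
    return not s(s)

# Asking "does R contain R?" means evaluating russell(russell), which
# expands to `not russell(russell)` — the question never terminates.
sys.setrecursionlimit(500)  # keep the inevitable failure quick

try:
    answer = russell(russell)
except RecursionError:
    answer = "undecidable: the definition contradicts itself"

print(answer)
```

The `RecursionError` here is the computer’s version of the barber scratching his head: the definition offers no consistent answer, which is exactly the hole in naive set theory that type theory was built to patch.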
Russell’s type theory consists of “any of a class of formal systems, some of which can serve as alternatives to set theory as a foundation for all mathematics.” Since computational language could be described, on a very basic level, as a series of math problems involving words and numbers, Russell helped set the stage for how future machines would process information and, maybe most importantly, catch whole classes of errors before a program ever runs. Eliminating these errors allows the machine to run efficiently and return valid data; after all, what good is a computational machine if it isn’t correct? One of the most intensive and time-consuming parts of writing code is making sure it works the way it should. Nothing is more frustrating than spending hours working on a component for a website or program, only to run it and immediately watch it error out and crash. Type theory helped streamline error checking by making it possible for some programming languages to verify types as the code is compiled, which drastically cuts down on development time and makes life easier for programmers.
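To make that idea concrete, here is a toy sketch of my own (the `checked_call` helper is hypothetical, not a real library function): a type checker looks only at declared types and rejects an ill-typed call before the function body ever executes, which is roughly what a compiler for a statically typed language does ahead of time.

```python
from typing import get_type_hints

def checked_call(fn, *args):
    """Toy type checker: verify each argument against the function's
    annotations, refusing to run the body if any type is wrong."""
    hints = get_type_hints(fn)
    expected = [t for name, t in hints.items() if name != "return"]
    for value, want in zip(args, expected):
        if not isinstance(value, want):
            raise TypeError(f"expected {want.__name__}, got {type(value).__name__}")
    return fn(*args)

def add(a: int, b: int) -> int:
    return a + b

print(checked_call(add, 2, 3))    # well-typed: runs and prints 5

try:
    checked_call(add, 2, "3")     # ill-typed: rejected before add() runs
except TypeError as err:
    print(err)                    # expected int, got str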
As I said, this is a simple way to look at what turned out to be a complex idea, but I think most computer users can appreciate the structure behind what we now take for granted when we run an application or play a game. Russell and his peers helped shape logic problems into manageable solutions decades before the rise of personal computing, which is incredible in and of itself. Type theory may make my brain hurt, but I am definitely happy it exists.