Posts Tagged 'Internet'

Steganography and Rhetoric in OSN

Steganography is the art of concealing a message in plain sight, a form of “security by obscurity.” It ranges from the “invisible ink” that kids play with, through the hiding of information inside digital pictures, up to complex cultural messages hidden in images and texts. It is not only a method of cryptography; it is a cultural tradition in use since the writers of sacred texts inserted hidden meanings so that only the initiated would understand.
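
The digital-picture case gives a feel for how simple the core trick can be. Below is a minimal sketch of least-significant-bit (LSB) hiding, one common technique; the post does not prescribe any method, and the Pillow library, the function names and the zero-byte terminator convention are all my assumptions for illustration:

```python
# A minimal LSB steganography sketch: hide the bits of a short
# message in the lowest bit of each red-channel byte of an image.
# Assumes the Pillow library; real tools are far more sophisticated.
from PIL import Image

def hide(image_path, message, out_path):
    img = Image.open(image_path).convert("RGB")
    pixels = list(img.getdata())
    # Message bits plus an 8-bit zero terminator.
    bits = [int(b) for ch in message.encode("utf-8")
            for b in f"{ch:08b}"] + [0] * 8
    if len(bits) > len(pixels):
        raise ValueError("message too long for this image")
    stego = []
    for i, (r, g, b) in enumerate(pixels):
        if i < len(bits):
            r = (r & ~1) | bits[i]   # overwrite the lowest red bit
        stego.append((r, g, b))
    out = Image.new("RGB", img.size)
    out.putdata(stego)
    out.save(out_path)               # PNG keeps the bits lossless

def reveal(image_path):
    pixels = Image.open(image_path).convert("RGB").getdata()
    bits = [r & 1 for (r, g, b) in pixels]
    message = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = int("".join(map(str, bits[i:i + 8])), 2)
        if byte == 0:                # terminator reached
            break
        message.append(byte)
    return message.decode("utf-8", errors="replace")
```

To the eye the stego image is indistinguishable from the original, which is the whole point: the secret travels in plain sight.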

Steganography has become a common practice in online social networks, used by teenagers to communicate privately with their peers over a medium that offers little or no privacy. An article by danah boyd shows how this works.

[Image: 15M in Madrid]

A more forceful form of steganography in online social media is found in political discourse, in the rhetoric used with the aim of manipulating public opinion and truth. Activists in cyberspace have found new rhetorical forms that border on steganography. The short-message format does not allow for a structured exposition of facts and persuasion; the message in digital media works by referencing cultural images, events, urban myths and key persons. Among them are the now well-known “image memes” that remix cultural references.

One has to be part of the new public discourse online to be able to decipher its meanings, and to be influenced by it. The activists did not invent the form; they are using a new way of communicating and conveying meaning that has grown up around digital media and social networks.

To influence the digital natives, political groups need to learn this code. Beyond that, they will need to create their own set of cultural references and new tropes, their own structure of meaning around pointers and signifiers. It is like creating a new set of metaphors for the new political prose of digital media: a prose that builds its rhetorical force on steganographic methods of concealment and the power of consensual meaning.

Reality Mining and Obscurity

Human ethics rests upon three principles. The first is the claim that there is something bigger than the human being itself that supports every system of thought or natural law. The other two concern how humans relate to one another in a social and political setting: Reciprocity and Anticipation.

Reciprocity compels us to treat everyone as an equal, the proverbial “Golden Rule”. Anticipation is the system of rules, knowledge, clues and taboos that enables us to predict how a human being will react in a certain situation, and ultimately how to evaluate and understand his or her actions in light of said rules: the normative part of ethics, if you will. This simplification is a model of sorts for understanding ethics as a whole. The minute aspects of ethical systems vary, but probably the majority of them reduce to this Über-model.

A model as such is a tool that enables us to understand complex behaviours when we lack the data or the time to analyze every aspect of a problem. A model, if designed properly, can account for a large number of individual cases, but it is only an approximation of the truth: many cases are left out, many details go unconsidered. And it is in the details that we really find meaning and identity.

So it is with the principle of Anticipation.  We create a model of the human being in a certain ethical, cultural and social setting.  We then use this model to predict and judge the behaviours of individuals and groups.  The catch-phrases of the anticipatory model are everywhere in our culture as memes, sayings and “Binsenweisheiten” (truisms).  But it goes well beyond phrases and memes.  The anticipatory system is the basis of the literary canon of Western civilization, from Homer, Aquinas and Isidore to Shakespeare, Cervantes and Freud.  The Canon is our working model and hypothesis of the (Western) human being.  It explains the passions, dreams, behaviours, love, comedy… everything possibly human.  And it does so by force of repetition and approximation rather than by facts and experimentation.  One of the books of the canonical scholar and literary critic Harold Bloom is aptly titled Shakespeare: The Invention of the Human.

But now fast forward.  Chris Anderson recently published an article in Wired questioning the validity of scientific models.  His point: why do we need models if we can go for the real thing?  Today we have enormous amounts of data and information available and, moreover, the computational capability to process and interpret that information.  This is going to change how we see and comprehend the world.

And what happens to the human model, the canon and the ethics?  We have information not only about worldly things like traffic and weather; we also have a lot of information about how humans behave: what they write and care about, what they think of events, how they react to catastrophes and how they fight against political systems.  Even information on how they buy, whom they know and speak to, and where they travel.  The life of the “nomadic Cyborg” (W. Mitchell) is ever more lived out in the electronic Landscapes, Opens and Commons.

The practice of extracting this information, of making sense of all the data in the computer systems, is called “Reality Mining”: in other words, obtaining a picture of reality from raw data in a non-normative way.  Don’t make assumptions (i.e. models) about reality; just extract reality from the data and understand it.
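
As a toy illustration of that non-normative stance, the sketch below infers a person’s daily routine purely from raw (hour, place) observations, with no prior model of what a day “should” look like; the log data is invented for illustration:

```python
# Reality mining in miniature: let a routine emerge from raw
# (hour of day, observed location) pairs instead of assuming one.
# The log below is invented for illustration.
from collections import Counter, defaultdict

log = [
    (8, "home"), (9, "office"), (10, "office"), (12, "cafe"),
    (13, "office"), (18, "gym"), (20, "home"), (9, "office"),
    (12, "cafe"), (18, "gym"), (20, "home"), (10, "office"),
]

# For each hour, count the places actually observed...
by_hour = defaultdict(Counter)
for hour, place in log:
    by_hour[hour][place] += 1

# ...and read off the most frequent one: the picture of this
# person's day comes from the data, not from a prior model.
routine = {hour: counts.most_common(1)[0][0]
           for hour, counts in sorted(by_hour.items())}
print(routine)
# {8: 'home', 9: 'office', 10: 'office', 12: 'cafe', 13: 'office',
#  18: 'gym', 20: 'home'}
```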

Alex Pentland of the MIT Media Lab calls this “Honest Signals”.  It is about extracting a clear picture of a human being, his or her environment and social operations.  With this information we can then make true statements about the affairs of the person and the social group.  This could be applied in a wide variety of situations: determining the social sentiment of a group (a happy crowd or civil unrest) from the tone of its voices, or detecting emerging public-health problems and epidemics early enough to respond faster.

Nevertheless, the demise of the canon’s model and the rise of Reality Mining in its stead open up a series of ethical, political and legal questions that must be pointed out.

First of all: Privacy.  Do we as citizens want to be exposed?  One of the rights of citizenship is the one that creates my private sphere, and consequently the social and civic sphere.

Second: The political form of free will.  This is the area where we as a society still have the option to act out of free will and not in a perfectly anticipated way.  What happens to free will if there is no anticipation, no measure of freedom to act outside of predictability?  We need some degree of confidentiality to maintain the anticipatory system of ethics.

Third: Social Capital.  This is an important question for the crafting of a digital citizenship: the right of ownership and privacy of our personal information.  Who owns my social links and spheres of influence?  Everybody uses social capital for personal advantage: to obtain a job, to sell or buy things, to be part of a Commons like the Internet.  If my social capital is in the hands of social networking sites, should I receive compensation?

Here are two interesting quotes.  One from Dr. Pentland speaking of Reality Mining, of which he says that “it’s an interesting God’s-eye view” (MIT Technology Review, 2008, TR10).  Of course he is referring to its possibilities for doing good.

The second from Bjarne Stroustrup, when asked by MIT Technology Review (Jul/Aug 2008) about the future of the Web: “The total end of privacy.  Governments, politicians, criminals, and friends will trawl through years of accumulated data (ours and what others collected) with unbelievably sophisticated tools.  Obscurity and time passed will no longer be covers.”

Obscurity and Doubt are a necessary part of every ethical and political system; they are preconditions for freedom and justice.  We need to make sure that we don’t lose them entirely by following the New Enlightenment movement of Reality Mining.

Collective Memory

Some years before Vannevar Bush wrote about the Memex, the solitary Jorge Luis Borges penned a singular short story: Funes el Memorioso.  There is a good translation by Andrew Hurley, with the somewhat awkward title Funes, His Memory.  At first it may seem strange to pair these two men in a sentence.  Borges was not much interested in technology, science and policy, as Bush was.  His interests were in riddles, myths, language, infinite libraries, mirrors and copulation.  But both Bush and Borges were concerned with the possibility of infinite memory, classification and information retrieval.

Funes is a young lad in a town somewhere in rural Argentina who is in possession of a prodigious memory.  He can recall every detail in the world just by willing himself to think about it.  He later begins to develop knowledge systems and classifications of his own.  He is a kind of idiot savant who eerily reminds one of El Niño Fidencio.

Both Bush and Borges conceived the expanded memory as a personal “appliance”: in the case of Funes, his abnormal brain; in Bush’s idea, the Memex machine itself, which resembled the desk of a clerk with complex machinery and microfilm inside.

Today we have a working version of this idea of an expanded memory.  It is contained in the Web.  But it is not something personal; it is a collective memory, created by a large number of persons and made available for everyone to use as an Information Commons.

We have two kinds of competing collective memory systems.

The first is the shallow web.  It is formed by millions of websites with easy-to-access information, created by individuals all over the world.  Google Search is the most common retrieval tool for this information.  It uses the links between websites to decide on the relevance of each one, in effect extracting a system of collective cataloging and ranking of information.
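
The idea behind that link-based ranking is, in essence, Google’s PageRank: a page is important if important pages link to it.  A minimal sketch follows, with an invented four-page web; the production algorithm is of course vastly more elaborate:

```python
# A minimal PageRank-style sketch: rank flows along links, so a
# page collecting links from well-ranked pages ranks high itself.
# The four-page "web" below is invented for illustration.
links = {  # page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)  # split rank over links
            for target in outgoing:
                new[target] += damping * share
        rank = new
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
# "C" collects the most inbound links, so it ends up ranked highest.
```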

The second are meta-web systems like Wikipedia.  Wikipedia tries to obviate the need to search the shallow web for information, instead offering multi-language articles crafted by a worldwide collective of editors: the idea of an encyclopedia expanded to every aspect of human knowledge and endeavour.

The most evident problems of a collective memory are its accuracy and completeness.  But in the end, information, like any other human activity, is a representation of the world.  The representations reflect our ideas about ourselves.

The PC and its user

In reading the first chapters of Jonathan Zittrain’s “The Future of the Internet”, some interesting ideas come to mind.

It is JZ’s argument, so far into the book, that the “generative” Internet is under threat from what he sees as a pull-back to the days of the (dumb) appliance and the central server, when the user had to put up with whatever the manufacturer (of TVs, radios, game consoles) or the service provider (CompuServe, AOL) chose to sell.  He sees a tipping point coming when PC users are finally fed up with spam, viruses, malware and identity theft.  A defining moment will then arrive, a kind of Internet 9/11, that pushes PC users to accept the change to secure appliances.

OK, so far so good.  I see the point, and it is a compelling one.  But… JZ assumes that every user of a PC and the Internet wants to be an empowered generative user.  Is that true?

I think, without further evidence to present at the moment, that the Internet user has largely become a consumer who uses the Internet for the basics: email, news, shopping, weather and the occasional video from YouTube.  Is there really a need for a generative platform for the use of these services?  Would the average Internet user not rather favor a standard “Net appliance” at a really low price, one that offers security and ease of use, instead of a souped-up PC with the many problems it comes with?

Or think of users in developing countries, or in certain demographics of the developed countries, where computer literacy is not the norm.  This kind of user would also really benefit from a low-cost, easy-to-use appliance.

We should not judge what is best for the majority starting from the needs of a minority.  I agree that the generative qualities of the Internet need to be preserved, but maybe this should happen not at the ends of the network, nor by preserving a PC business model that really does not help.  The best way may be to have the largest possible number of users connected (which in turn creates an incentive for companies to innovate), while realizing that the vast majority of those users would probably favor an appliance over a PC.  The PC is not where the generative Internet resides.

As I move forward through JZ’s argument I will update this post.

