Examining Anonymity Online (Birth of ornia)

= Essay =

"-- Fourth Amendment to the United States Constitution, ratified December 15, 1791"

Within the span of roughly thirty years, the Internet has undergone a rapid and tumultuous evolution from a specialized research technology whose sole purpose was linking a handful of elite sites into a global mass medium of communication, capable of replicating virtually any form of creative (or non-creative) human output with unprecedented economic feasibility. So swift and intense was the explosive growth in personal and family dial-up connections during the 1990s, followed by the relative ubiquity of 'consumer broadband' packages in the more developed portions of the United States and the world, that the inherent anonymity and consequent absence of accountability that partially characterize the Internet are not the result of conscious technological design; rather, they are side effects of transformative growth on an architecture never intended to handle so many users. A common perception is that the mere existence of anonymity amounts to an inherent inability to hold a person, or indeed a machine, accountable for their actions, and that it thereby provokes disruptive and dangerous behavior and wanton disregard for "intellectual property" legislation.

A rather diverse collection of technologies, varying in both development stage and real-world deployment, promises to remedy these perceived shortcomings. If carried to their logical conclusion across the Internet, the result would be a scenario profoundly different from today's. More than any other innovation of the last hundred or so years, the Internet symbolizes the empowerment of the individual: a spontaneous outlet to teach and learn, communicate and collaborate, explore and develop. Measured relatively, any such increase in personal freedom is gained at the expense of traditional, established centers of power: governments and large corporations. Thus a reversal of the long trend toward concentration of power begins to unfold, as the Internet carries the promise of elevating any user from the status of subject or consumer to that of free citizen and independent individual. From the theoretical vantage point of absolute centralization of power, the ultimate goal is to replace the peer-to-peer architecture of the Internet with the radically different production-versus-consumption model that characterizes traditional publishing and broadcasting. The Internet's dramatically low barrier to entry for content providers therefore stands in direct contradiction with the forces attempting to maintain the status quo.

It is in the interest of the future development and evolution of humanity that each technology proposed to "fix" these perceived shortcomings of the Internet in its present form be scrutinized and studied, both individually and in terms of how it interacts with the other technologies in the grand scheme of things. Special attention must be paid to the manner in which these technologies restore (or create) centralized control of information flow and regulate interaction between individuals, lest hierarchy be thrust forcefully and artificially onto the technological equality that structures the Internet. Of particular interest are the increasingly widespread practice of infecting digital files with Digital Restrictions Management, the ubiquity of proprietary operating systems and software, the initiatives of multiple corporations under the banner of "Trusted Computing", and that which is the natural progression and eventual outcome of these preconditions: certificate-based, permanent, non-modifiable identification of machines and even individuals.

ӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁ

Digital Restrictions Management is the technological enforcement of a content publisher's will, attached to a digital media file in varying degrees of restriction. The common buzzword for this topic is "Digital Rights Management", but that name is a misnomer, intentionally crafted to mislead: DRM manages rights in the same manner that jail manages freedom. The degree to which DRM removes an individual's rights is entirely at the discretion of the entity acting as certificate dispensary and authority for the content. Under the traditional notion of "Pay Per Copy", the absence of technological prohibition allows for the exercise of fair use rights, in addition to the presently illegal widespread uploading to a global pool of people. DRM, by contrast, can usher in an era of "Pay Per Instance", whereupon a given digitally encoded file is encrypted and a given individual's certificate acts as the decryption key. If another person retrieves such a file, they will be unable to use it without the purchaser's certificate to unlock it. In theory, such a scheme would go a long way toward curbing unlawful mass duplication of copyrighted material; in practice, however, DRM has thus far served to strip unknowing users of their fair use rights (as codified by the Copyright Act of 1976) without reducing piracy rates, thereby rewarding those who choose to pursue illegitimate means of acquiring a given work. At the root of this conundrum is the fact that DRM's purpose today is not to prevent piracy, as the media corporations would have us blindly believe; a great deal of piracy originates from within the entertainment industry's own supply chain, with many newly leaked films and albums never having come from a store-bought CD or DVD at all. This is essentially about absolute control over the medium: a method of locking independent content producers out of the inexpensive, high-quality distribution mechanism that is the Internet, while simultaneously forcing legitimate consumers to potentially repurchase the same content over again should it run out of 'activations'.
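As a purely illustrative sketch of the "Pay Per Instance" idea described above (not any actual vendor's scheme; the key names and content are invented), the following toy example encrypts a file against one purchaser's key, so that a copy of the file alone is useless to anyone else:

<pre>
# Toy "Pay Per Instance" sketch: content encrypted so only the purchaser's
# key/certificate can unlock it.  Illustrative only -- real DRM schemes are
# far more elaborate and hostile to inspection.
from cryptography.fernet import Fernet, InvalidToken

purchaser_key = Fernet.generate_key()   # stands in for the purchaser's certificate

def package_for_purchaser(content: bytes, key: bytes) -> bytes:
    """Encrypt the content so that only the holder of `key` can use it."""
    return Fernet(key).encrypt(content)

def play(packaged: bytes, key: bytes) -> bytes:
    """Unlock the content; fails for any key other than the purchaser's."""
    return Fernet(key).decrypt(packaged)

song = b"...audio data..."
locked = package_for_purchaser(song, purchaser_key)

assert play(locked, purchaser_key) == song   # the purchaser can play it

try:
    play(locked, Fernet.generate_key())      # anyone else cannot
except InvalidToken:
    print("copy is unreadable without the purchaser's key")
</pre>

Note how the restriction lives entirely with whoever issues the keys: limit or revoke the key ('activations') and the purchaser loses access to content already paid for.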

When it comes down to it, the entire controversy surrounding DRM becomes a dangerous, real-world issue because of legislation such as the United States Digital Millennium Copyright Act of 1998, which outlaws the reverse engineering and circumvention of proprietary copy protection technology. To make the circumvention of DRM illegal is to impose an unnecessary set of legal restrictions (copyright infringement is already illegal, after all) that serve to annihilate any concept of fair use for the average person. Given the political consensus that allowed legislation as draconian as the DMCA to pass, the ever-increasing funding and development of new DRM schemes, and the unprecedentedly high stakes for the media conglomerates involved, a near-term deployment of a highly secure system of restricted access is well within the realm of possibility.

ӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁ

Treacherous Computing, so named to highlight the anti-competitive effects a full-blown implementation would have on the IT market in general, has nothing to do with traditional concepts of data security or software reliability. Rather, it represents an attempt to embed end-to-end validation of the origin and integrity of data, interwoven through both hardware and software. A key concept to keep in mind is that each machine carries a 'Trusted Platform Module' or TPM chip on the motherboard, which serves, among other things, as a unique identification certificate for that computer. Within this system the user is not only "protected" from insecure software (that is, software not signed by a recognized vendor, or modified after signing as detected via its digital signature), but the very contents of the machine's primary internal storage are encrypted and signed, requiring access rights and data integrity to be checked every time a file is loaded into memory. Furthermore, if a security vulnerability is discovered in any widely deployed piece of software, that software can be remotely disabled until end users download and install a signed update containing the necessary bugfixes, often automatically. On a system implementing Treacherous Computing, the ability to back up and transfer data will necessarily be limited: the hardware and the compliant operating system being booted will restrict the seamless flow of data that can currently be moved via removable storage media or a network. Such low-level security must be built directly into the components that form the connection between hardware and software: the EEPROM BIOS and the operating system kernel. And so even a low-level operation such as booting an operating system must now pass a signature validation process, with operating systems not verified as fully implementing all Treacherous Computing requirements barred from even booting on certain systems.
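A minimal sketch of the signed-boot idea, assuming an invented vendor key pair and loader (this illustrates the general principle of signature validation, not any actual TPM, BIOS, or vendor interface):

<pre>
# Sketch of signature-validated loading: the loader refuses anything that is
# not signed by the recognized vendor key or has been modified after signing.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in vendor key pair; in a real scheme only the public half would live
# on the machine (e.g. in firmware) and signing would happen at the vendor.
vendor_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
vendor_public = vendor_private.public_key()

def vendor_sign(image: bytes) -> bytes:
    return vendor_private.sign(image, padding.PKCS1v15(), hashes.SHA256())

def boot(image: bytes, signature: bytes) -> None:
    try:
        vendor_public.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
        print("signature OK, booting")
    except InvalidSignature:
        print("refusing to boot: image is unsigned or has been modified")

kernel = b"...operating system image..."
signature = vendor_sign(kernel)
boot(kernel, signature)                 # accepted
boot(kernel + b" patched", signature)   # rejected: image no longer matches
</pre>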

ӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁ

When analyzing the subject of anonymity on the present-day Internet, it is important to keep in mind that much of the current situation stems from the address space crunch caused by the ubiquitous implementation of Internet Protocol version 4. IPv4 was designed for an era of the Internet in which true peer-to-peer connections existed between machines on dedicated leased lines. In such an environment individuals are held highly accountable for their actions; even with the common practice of multiuser timesharing systems, an IP address could still be traced to a specific machine, whereupon individual users' activity could be monitored by the system administrator. But since IPv4 provides only a 32-bit address space, or 2^32 ≈ 4.3 billion individual addresses, a shortage of unique IP addresses is inevitable: the total number of possible IPv4 addresses amounts to only roughly 60% of the world's population, even before accounting for reserved and special-purpose address blocks and the inevitable inefficiency of allocation. By 1995, when the IETF published RFC 1883 describing the specification for IPv6 with its 128-bit addressing scheme, it was already too late: the Internet was immersed in a period of exponential growth, as floods of new users and the attendant demand for bandwidth created an inertia of IPv4 dependency. A full implementation of IPv6 would require drastically changing every layer at once: applications, operating systems, routers, and more. Instead of a full adoption of IPv6, simply keeping up with increasing demand within the IPv4 infrastructure stretched available manpower and capital to their limits.
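The back-of-the-envelope arithmetic behind the crunch (the world population figure below is an assumed round number, used only for scale):

<pre>
# Address space arithmetic; the population figure is an assumption for scale.
ipv4_addresses = 2 ** 32            # about 4.3 billion
ipv6_addresses = 2 ** 128           # about 3.4e38
world_population = 6.5e9            # assumed

print(f"IPv4 addresses:            {ipv4_addresses:,}")
print(f"IPv4 addresses per person: {ipv4_addresses / world_population:.2f}")
print(f"IPv6 addresses:            {ipv6_addresses:.2e}")
</pre>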

Multiple technologies have been introduced to conserve the limited IPv4 address space; however, they tend to carry with them the unintended consequence of destroying the notion of a truly peer-to-peer network infrastructure while simultaneously facilitating the possibility of anonymity on the Internet. One of the first such technologies was the practice of dynamically assigning IP addresses to dial-up connections. Because the majority of users at the time connected to the Internet only for limited periods at certain times of the day, statically assigning and reserving an IP address per user was soon recognized as a waste of addresses. Instead, a user would receive a different address on each session, depending on which of the ISP's modems was dialed into. This has the effect of creating almost two separate classes of Internet users: those with dynamically assigned addresses can send and receive arbitrary connections while online, but there is no way for other nodes to know how to contact a given user whose IP address constantly changes. Of course, meeting places independent of this communication conundrum were established, such as ICQ or Dynamic DNS, allowing users to rendezvous at an invariant name, trade address information, and then establish independent connections. Such a workaround is dangerous in some ways, owing to the inherent unreliability of a more centralized, single-point-of-failure architecture, which was the downfall of Napster, among other things.
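A minimal sketch of that invariant-name workaround: a peer whose address keeps changing keeps a dynamic DNS record up to date, and others resolve the name to find its current address (the hostname below is made up):

<pre>
# Resolve a (hypothetical) dynamic DNS name to whatever address the peer
# currently holds; a direct connection can then be attempted at that address.
import socket

def find_peer(invariant_name: str) -> str:
    return socket.gethostbyname(invariant_name)

# find_peer("alice.example-dyndns.net")  # would return the peer's current IP
</pre>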

Perhaps more important than the dynamic allocation of globally valid IPv4 addresses is the recent widespread implementation of a variety of Network Address Translation technologies in routers, firewalls, and even software. NAT isolates an internal subnet of machines with private IP addresses, all sharing a single external address through a gateway device that performs port translation. Obviously, this allows for more efficient use of the address space by having multiple machines share a single globally valid IP. The key point is that with a dynamically assigned connection, the address remains constant at least for the duration of the session, and all traffic to and from that node is handled in the same raw, unfiltered manner a dedicated leased line would provide. Not so with NAT: machines on the private subnet can communicate with other Internet nodes only if they initiate the connection; unsolicited traffic directed at the NAT is most often denied by default. In the majority of NAT implementations on residential broadband connections, machines therefore cannot act as servers, since no outside node can connect directly to them to reach any services being hosted. Additionally, if two nodes each behind NATs wish to communicate, they must first connect to a third node, since they cannot initiate a connection with each other. This creates a centralized method of communication far more powerful than the peer-finding services mentioned above for dynamically assigned addresses, because every lookup request and every byte of data transferred passes through the third-party server, where it can potentially be monitored and intercepted. Used in conjunction, as they so often are nowadays, dynamic allocation of IP addresses and NAT remove a vital element of traditional Internet connectivity from the grasp of the end user. Because these technologies do not visibly disrupt clients' HTTP, FTP, or SMTP queries, they are largely transparent to the average user, who most of the time is completely unaware of having lost anything in the first place. Thus the process of reducing the average citizen to a 'firewalled consumer' with fewer rights and abilities on the Internet is perpetuated by the expanding number of NAT implementations present on modern DSL and cable connections.
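A toy model of the port-translation behaviour described above, assuming one public address shared by a private subnet (the addresses are documentation-range examples, and real NAT devices track far more state than this):

<pre>
# Toy NAT: outbound flows get a translation entry at the gateway; unsolicited
# inbound packets find no entry and are dropped by default.
PUBLIC_IP = "203.0.113.7"    # the single globally valid address

nat_table = {}               # public_port -> (private_ip, private_port)
next_port = 40000

def outbound(private_ip: str, private_port: int, dest: str):
    """A host on the private subnet opens a connection: allocate a mapping."""
    global next_port
    public_port = next_port
    next_port += 1
    nat_table[public_port] = (private_ip, private_port)
    return (PUBLIC_IP, public_port, dest)

def inbound(public_port: int):
    """A packet arrives at the gateway: forward only if a mapping exists."""
    return nat_table.get(public_port)   # None means default deny

outbound("192.168.1.10", 51515, "198.51.100.20:80")  # a web request goes out
print(inbound(40000))   # the reply to the mapped port is forwarded inward
print(inbound(40001))   # unsolicited connection attempt: None (dropped)
</pre>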

ӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁӁ

Imagine: nodes on such a network as envisioned here would by default deny any connection attempt from nodes whose certificates are unknown, and every outbound request would be accompanied by both the individual's and the machine's unique certificates. There would be no need for usernames and passwords, because identity would be automatically validated via the certificate exchange, with all connection properties recorded and logged by Internet Service Providers, to be queried at will by the government, if not handed over automatically straight into the depths of the 'state security' branches. In AT&T's explicit redirection of copies of all backbone Internet traffic in its trunks into the data analysis machines of the NSA, we can already see that such a forceful technological implementation of certificate-based communication would ultimately lead to a one-way mirror of accurate, constant, unprecedented surveillance.
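A toy model of the scenario sketched above (this is no existing protocol; every name and certificate string is invented): each request carries a machine certificate and a personal certificate, unknown certificates are denied by default, and every attempt is logged whether it succeeds or not:

<pre>
# Toy model of default-deny, certificate-accompanied requests with mandatory
# logging.  Purely a thought experiment, not an implementation of anything.
KNOWN_CERTS = {"machine-cert-1234", "person-cert-alice"}
ISP_LOG = []

def handle_request(machine_cert: str, person_cert: str, payload: str) -> str:
    ISP_LOG.append((machine_cert, person_cert, payload))   # logged regardless
    if machine_cert not in KNOWN_CERTS or person_cert not in KNOWN_CERTS:
        return "connection refused: unknown certificate"
    return f"accepted: {payload}"

print(handle_request("machine-cert-1234", "person-cert-alice", "GET /page"))
print(handle_request("machine-cert-9999", "person-cert-bob", "GET /page"))
print(ISP_LOG)   # both attempts, accepted or refused, are on record
</pre>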

I therefore propose a study of anonymity networks, specifically the second-generation Free Software project Tor and its effectiveness in encrypting and anonymizing the routing information of Internet packets. I also wish to examine the viability of the cross-platform peer-to-peer distributed data store GNUnet in its purported goals of providing electronic freedom of speech and anonymity. Of particular interest is comparing and contrasting the stated ideological reasons for these projects' existence and continued development with their present-day real-world use. Through studying and understanding these technologies in the past and present, I hope to shape a better understanding of how they fit in with the IT sector and humanity at large in the future, particularly their importance after the rollout of IPv6 and the potential certificate-based identification of all machines and individuals.
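For a rough intuition of what Tor-style layered encryption looks like, here is a heavily simplified sketch; Tor's real protocol uses telescoping circuits and different cryptographic primitives, and the three-relay setup below is only an assumption for illustration:

<pre>
# Layered ("onion") encryption sketch: each relay can strip exactly one layer
# with its own key; the plaintext appears only after the final layer comes off.
from cryptography.fernet import Fernet

relay_keys = [Fernet.generate_key() for _ in range(3)]   # entry, middle, exit

def wrap(message: bytes, keys) -> bytes:
    """Encrypt innermost-first so the entry relay's layer ends up outermost."""
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def peel(onion: bytes, key: bytes) -> bytes:
    """A relay removes its own layer and passes the rest along."""
    return Fernet(key).decrypt(onion)

onion = wrap(b"GET / HTTP/1.1", relay_keys)
for key in relay_keys:            # entry -> middle -> exit
    onion = peel(onion, key)
print(onion)                      # only now is the original request visible
</pre>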

"-- Inspector John Bonfield, Chicago Police Department, 1888" = See Also =

ornia

ornia Portal

Accomplished thus far ~ ornia

To be continued.... ~ ornia