Erika Lower | Manifesto

1. We should never entirely replace analog media with digital media.

While the increasing digitization of data via optical scanning, Project Gutenberg, and cloud storage has made information more accessible to more people than ever, hard copies of that information remain just as valuable. We take the platforms through which we access digital information for granted, but what happens if our wi-fi or electricity fails? The prosthetic knowledge we store on our devices becomes inaccessible, and a lack of physical backups can have consequences ranging from irritating to downright catastrophic. Without the ability to reference information, much of modern life would come screeching to a halt.

Cyberattacks and catastrophic weather are threats to the grid — and thus to our current way of life — so real that a group of US government agencies and power companies is running a drill this month, called GridEx II, to prepare for exactly such a crisis. Electromagnetic weaponry remains mostly in the realm of science fiction for now, but solar storms like the 1859 Carrington Event, which set telegraph lines sparking, and the 1989 storm that knocked out Quebec's power grid are genuine threats to our increasingly digitized lives as well. With no infrastructure currently in place to shield our power grids from electromagnetic pulses, keeping hard copies of important information on hand is a critical security measure, and analog data remains one of the best protections against the consequences of a large-scale technological disaster.


2. We should not create advanced artificial intelligence for its own sake.

The development of advanced artificial intelligence, for all its popularity in science fiction, is a concept inherently fraught with moral dilemmas. Philosopher Nick Bostrom points out a number of issues that might arise from such a technology:

Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is a cognitive pursuit, a superintelligence could also easily surpass humans in the quality of its moral thinking. However, it would be up to the designers of the superintelligence to specify its original motivations. Since the superintelligence may become unstoppably powerful because of its intellectual superiority and the technologies it could develop, it is crucial that it be provided with human-friendly motivations.

While many of the former qualities are highly desirable, the latter are cause for serious concern. Nobody knows what form advanced artificial intelligence might take or what its motivations would be, and even if it were designed for a specific purpose, there is no guarantee it would stick to its original programming. After all, human behaviors, opinions, and cognitive processes evolve over time, and a digital entity with intelligence equivalent to or greater than our own may well follow a similar path, potentially to catastrophic effect —

Or not. We simply cannot know. Creating an intelligence that is potentially dangerous — and, more importantly, potentially sentient — for no reason other than "because we can" is a troubling idea. If the intelligence is self-aware, must it be granted rights? Does it have free will? The very nature of what it means to be sentient, and the moral implications thereof, would be called into question in a very dramatic way. For now, it is much safer, both physically and morally, to leave such constructs in the realm of science fiction.


3. We should carefully analyze the implications of new technologies before releasing them into the market.

In Charles Stross and Cory Doctorow's novel The Rapture of the Nerds, citizens are randomly selected for a sort of technological jury duty to determine whether potentially disruptive new technologies should be released to the general public. While the novel is humorous science fiction, the idea of a review board for new technology is not unlike the policy adopted by the Amish, as Kevin Kelly discusses in his nonfiction book What Technology Wants.

The chance to beta-test — and then reject! — a new technology is almost unheard-of outside of small religious societies, but it is a concept that could be intensely useful on a larger scale. Implementing it nationwide in the US would be extremely difficult, but it might be possible in narrower contexts, such as the use of specific technologies in classrooms. Smartboards, iClickers, and the tablet-laptop hybrids used in engineering courses have all permanently changed the structure of the classes they've been placed in, and not always for the better. Given the chance to review and then accept or reject a technology, rather than being forced to adopt it permanently whatever the consequences, citizens would feel empowered to make choices that are actually in their own best interests, restoring a greater sense of control over and comfort with the technologies they do use.

Kelly, Kevin. What Technology Wants. New York: Penguin, 2011. Print.

4. We ought to avoid computerized grading of essays in schools.

While computerized grading is certainly useful in some forms of assessment, such as Scantron multiple-choice exams, the ongoing push to extend this system to other fields, such as essay writing, is dangerous — not because it is impossible to do accurately, but because it encourages teachers and students to teach and learn to the test rather than to the material itself. The result is superficial, formulaic writing rather than the in-depth analysis that would require actual understanding of the subject.

As Marc Bousquet writes in The Chronicle of Higher Education:

Unsurprisingly: No reliable computerized assessment can tell whether a review of scholarly literature is an accurate representation of the state of knowledge in a field. Nor can it adjudge whether a proposed intervention into a conflict or neglected area in that field is worthy of the effort, or help a student to refine that proposed experiment or line of analysis. Of course, many of the persons we presently entrust with writing instruction lack the ability, training, or academic freedom to do so as well.

Teaching to the test, rather than for understanding, is already a huge problem in our school systems. By mechanizing essay grading — one of the last reservoirs of genuine creative thought and analysis among the assignments presented to students today — we reduce yet another chance for students to genuinely learn to mere memorization and pattern recognition.


5. We should not replace human jobs with machines in the name of corporate efficiency alone.

As technology grows more complex, cost-effective, and easy to use in a wide range of situations, jobs once held by human beings are increasingly being taken over by machines. It is not only factory workers and farmers being displaced — middle-class workers in customer service, repair work, and many other fields are increasingly being challenged by newly introduced mechanized systems that are faster and more efficient than any human could be.

Even those in highly creative fields are not exempt. As a science writer myself, I was filled with a sort of existential terror after reading an article in DePauw Magazine this summer about a newly developed program that can interpret scientific research papers and translate their ideas into articles written in everyday language. The algorithm is so good, the article claimed, that the machine can essentially pass the Turing test — and it presented a computer-written article and a human-written article side by side for readers to compare for themselves. To my utter horror, I couldn't tell which was which. Should enough companies warm to the idea of using this software, my future job may have just been rendered obsolete.

In an economic climate that's precarious at best and a job market that's looking grim, the last thing we need to do as a society is mechanize jobs that can be performed perfectly well — and happily — by a human being. While sparing workers the endless drudgery and dangers of a factory floor is to be lauded, replacing significant numbers of mid- to high-level human jobs with computer programs in the name of a profit margin is counterproductive and ultimately destructive on every level.


6. We should embrace opportunities to augment ourselves with sense-extending technologies.

Human experience is a wonderful thing. We process the world through the senses we're born with, each giving us valuable information about our environment in spectacularly complex and detailed ways. But there is an entire world of experience beyond the range of those senses. We observe only a narrow slice of the electromagnetic spectrum, we have no innate sense of the earth's magnetic field, and we cannot detect many forms of chemical and radiological activity around us — yet through technology, all of this is possible, and more.

New body modifications include magnets implanted in fingertips that let users physically feel the electromagnetic fields around them, synesthetic devices that let the colorblind hear color, and compass belts that give wearers a built-in sense of direction. In our increasingly digitized lives, extending our senses may be one of the best ways to re-ground ourselves in reality and engage with our physical environments in new and exciting ways.