Manifesto: Tony Pagliaro

• We should individually monitor our daily internet use.

Every person who cares about their mental health and societal well-being should monitor their internet use, including playing online video games and smartphone use. Being connected for too much of a day can lead to higher levels of stress and feelings of being “trapped,” as well as addictive behaviors (Turkle 174, 227). Each individual must figure out how much is right for them on a day-to-day basis; it’s not up to anyone else to decide. Governmental regulations seem ridiculous on any front short of enacting a curfew on certain sections of the internet, or on the whole thing. I’ve heard of some governments doing this for online computer game servers (i.e., shutting them down overnight, every night), and the result was mostly that people either played more during the day when they were supposed to be working, or they set up their own servers and networks to play on privately. Technology can cast a firm grip on the human mind, but we will never be able to keep it away from people who truly want it. We should instead increase awareness of the potential problems and inform people of the issues so that they can decide for themselves. Society can only benefit from keeping this hands-off, individualist approach, though the benefits may be slow in coming. Some people may advocate governmental regulations or monitoring systems to reduce overindulgent internet use, but societal pressures can solve this problem more adequately and with fewer unintended consequences.


• We should do more studies and spread awareness of video game and internet addiction.

From 2010 to 2011, the number of divorce papers that cited placing video games over the relationship tripled (see link below). Websites like Wowaholics Anonymous chronicle the stories of addicted gamers and offer advice and services to help them “recover” (www.wowacholics.org). Video games are intense, epic, and invigorating. They are entirely too easy to get sucked into. People need their escape time, but playing video games is often more stressful than relieving. Addiction to virtual stimulation shouldn’t be hard to imagine, either. I’ve experienced most of these things myself. I used to throw video game controllers at walls and stay up all night playing WoW online. I’ve since “recovered,” but I still have friends I game with occasionally because it’s pretty much the only thing they do. I try to get them out of the house, but it seems hard to talk about video game addiction with somebody I play video games with. That’s why we need more research to look into these claims. I’ve heard statements like “World of Warcraft is as addictive as cocaine” that aren’t backed by any evidence. Sociologists are attacking online video games from all angles, but they often focus on the communities or economies that emerge within the virtual worlds rather than on the effects the games are having on people in the real world. This is a topic we touched upon repeatedly in class, although not always in the context of video games. Video games are just one manifestation of the overstimulating entertainment media that surround us every day. They have advanced faster than our willpower has been able to keep up, and we need to begin collecting more evidence of the effects and spreading the word.


• We should resist government attempts to regulate the internet.

Siva Vaidhyanathan explains the idea of public failure on pages 40–43 of The Googlization of Everything. He seems to believe that state-sponsored institutions would fare better with a shift in mindset among the public. He says that “feeling good about our own choices” is not enough, but that we must go the next step and begin “organizing, lobbying, and campaigning for better rules and regulations” (43). I agree that organizing and lobbying for better behavior is important for our society. However, I disagree that the lobbying should be for better “rules and regulations.” We should demand better standards and behavior from the companies themselves and file suit against them when they infringe on our rights. We can buy stock in them, write glowing reviews, boycott them, or lobby against them, and have a greater effect than by appealing to a middleman. Markets will regulate themselves if given time and if consumers have enough information. The internet is the greatest tool mankind has found for disseminating information, and it could therefore allow for more informed personal choice that actually makes a difference, as Vaidhyanathan wants. The irony is that regulating the internet could put the clamps on reliable information sharing, which would hurt consumer choice and have unintended consequences for markets. The internet has by design always been neutral and decentralized, and as a class we should take it upon ourselves to keep it that way.


• We should allow more freedom to experiment with genetic manipulation.

This debate raises powerful ethical issues, but again I believe that freedom and the fast dissemination of information are the keys to making the fastest and safest progress. People may point to violations of human rights, especially if the subject is a fetus, as a reason to halt genetic research. They also often bring up “playing God,” saying that we are not meant to tinker with our genetic code. Others may simply argue that the potential harms outweigh any immediate good that will come of it. Yet, as Kelly points out, to “halt human gene research until such time as [it] can be proven to cause no harm… is exactly the wrong thing to do” (261). Human rights may come into play if the patient is a fetus, but most patients will be consenting, informed adults. Any institution that follows standard human research protocol should be encouraged to experiment with anything it pleases (that its volunteers consent to, of course). Experimenting on fetuses is a potential human rights issue, but we already perform many medical procedures on fetuses, usually to save their lives. We are likewise already “playing,” or at least defying, “God” with all modern medicine, which is usually employed to save lives. This leaves the final counterargument: that the potential harms outweigh the good. The first genetic markers to be found and adjusted will likely concern diseases and mental illnesses. Long before anyone even begins to attempt to improve intelligence via genetic manipulation, we will have mastered genetic manipulation to make us immune to disease. There are risks, and there will be harms, but we can either concentrate the risks and allow for quick correction, or we can spread them out by regulating heavily and hope to arrive at the same endpoint eventually.
Either way we arrive at the same endpoint (the successful use of genetic manipulation); the only difference is how soon the breakthroughs arrive, and thus how many lives they can save. On this subject, we must put our fear on the back burner and encourage truly revolutionary experimentation.

As a related side-note, on March 26, 2012, the Supreme Court threw out a lower court ruling that genes can be patented. Should we be allowed to patent gene sequences? Go ahead and mull that over for a while.


• We should focus investments on safe AI over smart AI.

Whether or not Moore’s law holds steady, superintelligent artificial intelligence is on its way within the next century. The singularity is coming. Count on it (see this paper or visit singularity.org for more information). Just because we can make a supercomputer that’s more intelligent than a human brain doesn’t mean that we should. Not right away, at least. We must first be certain that the AI is safe to the point that it won’t, simply put, destroy us all. To do this, we have to instill in the AI certain values that manifest themselves as an ethic toward humanity. These may take the form of something like Asimov’s Three Laws of Robotics, which we read about early in the semester. Each story in Asimov’s I, Robot pushes progressively farther into the future, and most deal with some problem revolving around a conflict between the rules. The problem is always resolved logically, so the rules never explicitly fail; they just don’t always work. Over time, as the characters become more and more familiar with what the technology wants, they have fewer issues with rule conflicts. Ultimately, in the concluding story, a world-governing robot actually comes to know us better than we know ourselves and begins correcting for misinformation provided by humans. This kind of robot is what we want: one that tells us what it wants while teaching us how to give it. This may seem utopian, but the destiny of AI technology is already mostly decided. It’s up to us to help steer the specific manifestation of that destiny. Again, I suggest checking out singularity.org to learn more about safe AI.
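To get a feel for what Moore’s law implies over the century I just mentioned, here is a tiny illustrative calculation. The two-year doubling period is the usual rule-of-thumb assumption, not a measured figure, and the function names are my own:

```python
# Illustrative only: compound doubling under a Moore's-law-style assumption.
# The 2-year doubling period is a rule-of-thumb assumption, not data.

def doublings(years, period=2):
    """Number of complete doublings in `years`, with one doubling per `period` years."""
    return years // period

def growth_factor(years, period=2):
    """Multiplicative growth in capacity after `years` under steady doubling."""
    return 2 ** doublings(years, period)

# A steady 2-year doubling over a century means 50 doublings,
# i.e. capacity grows by a factor of 2**50 (about a quadrillion).
print(growth_factor(100))  # prints 1125899906842624
```

Even if the real doubling period stretches out, the point stands: exponential compounding over decades dwarfs any intuition we have about linear progress, which is why the arrival of superintelligent AI within a century is plausible on hardware trends alone.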


• We ought to support the inevitable march of technological progress and hope that one day our supermassive AI overlords allow us to bow to them.

Enough said.

Ok, maybe not, but most of my principles coalesce into this final one. As Kevin Kelly points out, there is an “air of inevitability” surrounding technological progress: “When the necessary web of supporting technology is established, then the next adjacent technological step seems to emerge as if on cue” (138). Human progress is slow but steady, and lately it hasn’t been so slow. When progress in a sector seems to slow down, it is often only in preparation for the next revolution in that sector. As Kelly puts it, “as one exponential boom is subsumed into the next, an established technology relays its momentum to the next paradigm and carries forward an unrelenting growth” (171). As we attempt to control technologies, then, we may think we are succeeding because progress is slowing down, only to find that a new, uncontrollable breakthrough was waiting in the wings. I’m betting that many of you will not agree with me here, because you will advocate that technology should be controlled or slowed down. I’m going to tell you that you are wrong. Safety and control are good things in certain contexts, but not in others. I have already advocated for the safe use of AI and control over internet time in this very manifesto. However, I believe that the individual should be responsible for control of his or her own life in every regard. Reducing one’s internet time is good; government-mandated internet curfews are bad. Researchers shifting attention to revolutionary genetic engineering is good; stringent regulations on research methods are bad. Shifting private investor money from smart AI to safe AI is good; diverting government funds to AI research is bad. Each of our goals can be accomplished more fully and more efficiently by swaying public opinion and taking individual responsibility than through government manipulation.
By allowing the agents of the government to decide these things for us, we all become responsible for the unintended consequences that arise, while exonerating wrongdoers who do actual harm within the system. We must stop passing the buck of social responsibility to authority figures and begin taking decisions into our own hands. As individual agents in a global marketplace, our money may not seem to have much of an effect, but our words and opinions (and lawsuits, if justified) can. We can and should decide on our own what will thrive and what will die in our technological future. Barring some kind of cataclysmic event, much more will thrive than die, including God-like supercomputers. As we attempt to imbue our new Gods with the values of the human race, would you rather they see us live in a society dominated by coercion and corruption, or in one of freedom and personal responsibility? Personally, I would rather choose to bow than be forced to.