Ryan McLaughlin: Manifesto

Essay-grading software should be used in academic institutions.

Currently, we have automated grading systems for multiple-choice tests. This not only allows students to receive their grades more promptly, but also lets teachers see which questions most students got wrong. There is also software that detects possible cheating. The next logical step is software that grades essays and papers. According to an article in the New York Times, “The software uses artificial intelligence to grade student essays and short written answers, freeing professors for other tasks” (Markoff). With this software, the subjectivity of grading would no longer exist, and there would be a consistent grading standard. Students could receive immediate feedback on their writing, and once a grade is received, a student would have the opportunity to rewrite the paper in order to improve it. Before OpScans, it took a long time for students to receive feedback on their tests; now it takes about a day to a week. Papers take anywhere from one to three weeks to get back, depending on their length. If academic institutions embraced this software, it would give teachers more time to focus on teaching instead of grading, because they would not be bogged down grading the lengthy papers of twenty to forty students. For students, by the time they receive feedback on a paper, they are no longer thinking about it; their focus is on the next subject they are learning. If feedback were immediate, they could rewrite the paper right away, with the topic still fresh in their minds. Obviously, this essay-grading software would have to be tested and perfected before it could be implemented in academic institutions.

Technology should be tested for safety before it is released to the public.

Our society as a whole is wonderful at embracing new technology. Kevin Kelly comments, “We are good at trying first, not good at relinquishing” (226). The Amish, on the other hand, live about fifty years behind us; most of the technology they use was created within the last hundred years, and by the time they do embrace a technology, it is no longer considered new by the rest of the world. But the Amish are smart in this way, because “by that time, the benefits and costs are clear, the technology stable, and it is cheap” (225). I am not arguing that we need to live like the Amish, because honestly, the Amish would not be able to survive if it were not for the rest of the world creating new technologies. I am arguing that technology should be tested, perhaps the way certain drugs are tested through the FDA, before it is released to the general public. Studies suggest that using smartphones is weakening human memory, and research has examined what cell phone radiation does to the brain; claims have been made that cell phones cause cancer and scramble one’s brain. Jim Helms writes about an experiment in which lab rats were exposed to cell phone radiation for two hours a week for over a year, compared with a control group that received no cell phone radiation beyond the background level that everything on earth receives. Helms reports, “The conclusion of the study was that the experimental group showed poorer results on a memory test than the control group” (Helms). Our society has become very good at embracing new technology, but we do not always think about how it might affect us in the long term. Studies should be done to assess potential long-term effects before new technologies are released.

Helms, Jim. "Cell Phones Are Affecting Memory." Examiner. The Examiner, 28 Feb. 2013. Web. 27 Mar. 2013. <http://www.examiner.com/article/cell-phones-are-affecting-memory>.

Technology, when necessary, should be used to keep us safe.

We have talked in class about the surveillance cameras in London, and about the privacy issues that come with having cameras watch your every move. In light of recent events, I think that technology should be used to keep the public safe, privacy concerns aside. Without the cameras and footage from the Boston Marathon, we would never have been able to apprehend the suspects as quickly as the Boston police and FBI did. Through the use of various cameras in Boston, the FBI was able to release footage of the suspects a mere two days after the bombing occurred, and not the best footage either. According to multiple news outlets, the two suspects had enough ammunition and bombs for more attacks. Had the cameras not been used, the suspects would not have been apprehended so quickly, and there could have been many more attacks and more people hurt or killed. The idea of improving security through technology is not new; it took hold after the September 11th terrorist attacks. A Fox News article states, “However, as improved security made such attacks more difficult, terrorists have instead attempted smaller attacks in crowed, more public spaces, raising questions about the need for more surveillance cameras and other security measures while trying to balance privacy concerns” (Terror attacks in public places force US officials to balance safety, privacy concerns). While on any given day having surveillance cameras around the city might infringe on your rights, those same cameras can also protect the public.

CourseSmart should be used in academic institutions.

It is the age-old question of “Did you read it?” While there are some good students out there, there are also many students who choose either to skim their reading or not do it at all. CourseSmart, a program being tested at nine colleges, is designed to track students’ progress through digital textbooks. A New York Times article says, “But CourseSmart goes further by individually packaging for each professor information on all the students in a class — a bold effort that is already beginning to affect how teachers present material and how students respond to it, even as critics question how well it measures learning” (Streitfeld). Students are monitored in two ways: through their quizzes and through an “engagement index” drawn from their use of the textbook. Teachers can usually tell when students are not reading the material based on blank faces or a lack of interest in the discussion, but this program can tell teachers exactly who is falling behind. Students can only see their engagement index if the teacher shows it to them, so if a teacher is aware of a student falling behind, they can share that information and talk to the student about it. The program could also help publishing companies: if certain chapters are not being read or assigned in classes, the publisher might choose to drop them from a later edition. While this program would have to be perfected before it could be used widely, it could help teachers understand how their students learn and study.

Health Information Technology should be embraced in order to improve the managing and sharing of patient information.

In an article from the Alliance for Health Reform, health information technology is defined as “Information processing using both computer hardware and software for the entry, storage, retrieval, sharing, and use of health care information. Two common components of HIT are electronic medical records and computerized physician order entry” (A Reporter's Toolkit: Health Information Technology). The implementation of health information technology (health IT) has been slow in the United States. “For example, while one in four doctors reports using electronic health records (EHRs), fewer than one in ten is using a ‘fully operational’ system” (A Reporter's Toolkit: Health Information Technology). “Fully operational” systems are defined as ones that “collect and store patient data, supply patient data to providers on request, permit physicians to enter patient care orders, and assist providers in making evidence-based clinical decisions” (A Reporter's Toolkit: Health Information Technology). If health IT were embraced, there could be a significant reduction in medical errors, and patient information could be shared more easily. If patient information were shared more easily among health providers, fewer mistakes would be made and efficiency would improve, which would in turn save money. The article states, “According to RAND Corporation researchers, full implementation of health IT systems could produce efficiency savings as great as $77 billion per year after a 15-year adoption period” (A Reporter's Toolkit: Health Information Technology). Healthcare is extremely important, and patient records are even more so. If a dosage of medicine is written down incorrectly, it could mean the end of someone’s life, and it happens more often than it should. Another component of health IT is computerized physician order entry (CPOE), which allows doctors to order tests and prescriptions digitally, reducing the errors caused by scrawled prescription notes. The toolkit states, “CPOE systems check for the accuracy of prescription orders, flagging any orders that appear extreme. One study concluded that CPOE systems for prescriptions could reduce preventable medication errors by as much as 55 percent because they ensure, at a minimum, that orders are complete and legible” (A Reporter's Toolkit: Health Information Technology). For this adoption to happen, all parties involved in a patient’s care must be connected and must use software that can communicate across different offices, hospitals, and systems.

Criminal investigations should be crowdsourced through intelligence gathering only.

Again, in light of recent events, crowdsourcing has been a popular topic. An article in Wired announced, “The investigation of Monday’s deadly twin bombings in Boston will rely to an extraordinary extent on crowdsourced surveillance, provided by Marathon spectators’ cellphone photos, Vine videos, and Instagram feeds” (Data for the Boston Marathon Investigation Will Be Crowdsourced). With this call to the general public, the Internet went wild, and the call turned into a witch hunt; just look at the article “5 People The Internet Was Obsessed With Who Aren't Boston Bombing Suspects.” Crowdsourced crime solving should not be embraced by the police or the public, and the hunt for the Boston Marathon suspects proves it: people were accused of being terrorists when they were not even suspects for the FBI. We need to let the professionals do their job. The crowdsourcing of intelligence gathering, on the other hand, was a great success for the FBI in identifying the Boston Marathon suspects. Some surveillance cameras were used, “but they are fixed in place and have large blind spots - people, on the other hand, can provide deep context and multiple points of view of the same situation” (Wadhwa). By crowdsourcing intelligence gathering, the FBI was able to collect evidence from the public: their pictures, their videos, and what they witnessed firsthand. Wadhwa comments, “Like with many crowdsourcing-related activities, individuals are good at providing information or reporting events, but it is the next stage - taking action, where things often fall apart. The more passive their role, the more effective they have been” (Wadhwa).