Please use this identifier to cite or link to this item: http://hdl.handle.net/1946/21046
The main questions posed in this thesis relate to a phenomenon known as "Lethal Autonomous Robots" (LARs): "weapon systems that, once activated, can select and engage targets without further human intervention." LARs raise far-reaching concerns about the protection of life in both war and peace.
This includes the question of the extent to which Lethal Autonomous Robots can be programmed to comply with international humanitarian law and the standards protecting life under international human rights law. Even if they are capable of meeting all these criteria, the question remains whether their use would still in some way violate the spirit of the rules of international humanitarian law. Will they be capable of distinction and proportionate attacks? Is the taking of a life an inherently human action, aside from the acts of nature? These questions are fundamental in gauging whether Lethal Autonomous Robots can legally be put into action on the battlefield. These robots, whose workings are further explained in chapters 3 and 4 of this thesis, have colloquially come to be known as "killer robots." Although they will in all likelihood not exist for a number of years to come, LARs are already on the drawing board, or at least being conceptualized, in the research facilities of many modern armies.
Chapter 2 of this thesis gives a brief overview of the development of international humanitarian law, explores its fundamental principles, and examines challenges to the current legal regime, in particular how the terrorist attacks of 11 September 2001 affected the military operations of Western states in ways no one could have foreseen.
International humanitarian law therefore faces a number of challenges stemming from changes in how wars are fought in modern times. The main topic of this thesis, however, is not how previous changes in the battlespace have been a constant source of questions and challenges for those who fight for humanitarianism in war-torn areas, but how advances in robotics and autonomous machines are likely to bring new and even more challenging questions to the table.
Chapter 3 of the thesis focuses on the technological aspects of the weapons that can be considered predecessors to Lethal Autonomous Robots. These machines, although not fully autonomous, offer advanced automation in their respective fields and have in a sense relegated human operators to the role of green-lighters, with only seconds to decide whether a weapons system engages the target it has automatically designated.
Armies have for decades fielded weapons with some autonomous capabilities, such as the MK 15 Phalanx CIWS. Drones may also be considered a stepping stone in this development: first, remove the soldier from the battlefield; then, remove the "soldier" entirely, replacing him with a machine. If and when these machines enter the battlefield, they will most likely be less like the Terminators and HALs of science fiction and more like a fighting computer, without personality, without humanity, and without remorse, since their actions will be based solely on their programming.
Chapter 4 looks forward, focusing on how Lethal Autonomous Robots might be able to adhere to the fundamental principles of international humanitarian law, and on the legal and technological challenges they are likely to face in the near future. Although fully autonomous lethal machines have not (yet) emerged, a number of NGOs, humanitarian watch groups, and scholars in the fields of law and robotics have warned of the potential (mis)use of Lethal Autonomous Robots in the battlespace. The debate over whether these machines should be outlawed, even before they emerge, features heavily in the chapter.