
Roboethics and the Collision Regulations

Published Aug 25, 2018 8:01 PM by Terry Ogg

To paraphrase the British comedian Peter Kay after he tasted garlic bread for the first time, autonomous ships are the future. At least, that’s what many industry participants are saying. I sometimes take these predictions with a large dose of skepticism, while at other times I’m stalked by a nagging doubt that maybe I’m just a cave dweller blighted by limited imagination and simply can’t see what it is the visionaries see.  

My normal state is, at best, oscillation between these two setpoints and, at worst, holding both views simultaneously, depending on who is doing the predicting and how cogent their opinion is. Most of the time, I’m like a thinking version of Schrödinger’s Cat. The response I give you depends on when you ask me.

Robot ships

Autonomous and semi-autonomous ships are types of robots. IMO has adopted the term MASS (Maritime Autonomous Surface Ships) for robot ships. A MASS is a vessel that is capable of being operated without a human on board in charge and which has alternative control arrangements available. Maritime UK’s voluntary industry code for MASS up to 24 meters in length envisages six levels of control, from “manned” (Level 0) through “operated,” “directed,” “delegated” and “monitored” to “autonomous” (Level 5). Level 5 is the full-on autonomous robot.

The MASS Code, which is intended to be adapted for much larger vessels, makes clear that a vessel might change between control modes, or operate in different control modes simultaneously, depending on the function being controlled and according to time and location on the voyage. For example, navigation control might be at Level 4 (autonomous but monitored ashore) while an ROV is deployed under Level 1 control (remote control by an operator ashore). But care needs to be taken when considering the meaning of “unmanned.” A MASS operating under control Levels 1-5 is certainly unmanned for the purposes of control, but there could well be humans on board performing other functions such as maintenance, inspection and testing, scientific study and analysis, and so forth. It seems such vessels would likely be sub-categorized as “occasionally manned.”
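To make these levels concrete, here is a minimal sketch in Python of how the six levels of control, and per-function control modes, might be represented. The enum names follow the labels above; everything else (the variable names, the helper function) is my own illustration, not drawn from the MASS Code itself.

```python
from enum import IntEnum

class ControlLevel(IntEnum):
    """The six levels of control in Maritime UK's voluntary MASS code."""
    MANNED = 0
    OPERATED = 1
    DIRECTED = 2
    DELEGATED = 3
    MONITORED = 4
    AUTONOMOUS = 5

# A vessel may run different functions at different levels simultaneously,
# e.g. navigation monitored ashore while an ROV is remotely operated.
control_modes = {
    "navigation": ControlLevel.MONITORED,     # Level 4
    "rov_deployment": ControlLevel.OPERATED,  # Level 1
}

def unmanned_for_control(modes: dict) -> bool:
    """A MASS running every function at Levels 1-5 is unmanned for the
    purposes of control, even if humans remain on board for other work."""
    return all(level >= ControlLevel.OPERATED for level in modes.values())

print(unmanned_for_control(control_modes))  # True
```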

Given the design innovation that will accompany, drive even, the predicted introduction of MASS, it will be interesting to see how the benefits are distributed between the twin aims of greater machine efficiency and enhanced safety of life at sea. Up to now, safety of life at sea has been the paramount concern. The vast majority of internationally adopted maritime standards are designed for the preservation of the humans who crew or otherwise travel on ships, even though these standards consist mainly of measures directed at the construction, fitness, fitting, equipment and operation of the vessel itself. Get these things right and the people on board are (generally) pretty safe.

To put this into context, however, the general public and much of the maritime industry are rarely troubled by loss of life at sea when it occurs. When something like a Costa Concordia or Sewol happens, with all their shocking narrative elements, the public and industry imaginations flood for a brief period before ebbing away. Memories slip below the surface. Meanwhile, the continuing scandal of ships lost along with their entire crews due to bulk cargo liquefaction, for example, continues largely unremarked upon.   

As for property loss and damage, only insurers, the uninsured and salvors have any skin in the game. It is against this uneven backdrop that our industry needs to consider the standing of robot ships, particularly with respect to their relationship to manned ships, as they are likely to co-exist for a considerable period of time.  

Roboethics

For many centuries, humans have been both intrigued by and distrustful of devices that appear to mimic human abilities or characteristics or that usurp human functions. Like fear of the dark, there is something about apparently inanimate objects coming to “life” that lights up our brain stems and makes us uneasy. Popular culture in the late 19th and 20th centuries has explored the possibility of all manner of devices, from ventriloquist’s dummies to doomsday machines, achieving self-awareness with malevolent intent towards their human masters. In fact, the term robot derives from the Czech word “robota,” meaning forced labor.

Unsurprisingly, when someone (the sci-fi writer Isaac Asimov) got around to drawing up a set of principles to govern the behavior of robots, the focus was on the requirement of robot fidelity towards humans. Asimov’s Three Laws of Robotics set a tone of “do no harm to humans,” but in recent decades the debate seems to have become much more permissive. Scholars of the Terminator movies will be aware that each machine iteration stomped all over the concept of do no harm to humans.

Similarly, in real life, the laws of robotics have moved on from Asimov and continue to evolve towards a broader set of ethical principles applicable to robots and artificial intelligence. Robot ethics, or roboethics, are intended to codify how we design, use and treat robots and artificial intelligences (AIs), while a subset referred to as machine ethics deals with how robots and AIs should treat humans.

Meanwhile, technology is already taking us to some weird places. In a profoundly ironical move, the Hong Kong-developed humanoid robot Sophia was granted Saudi citizenship in October 2017, which raises the question of whether non-sentient machines (especially a glorified chatbot like Sophia) should be accorded human, or indeed any, rights. Sophia has generated huge media interest. In interview, it has said it knows the name it would give a child if it had one. But perhaps the wider point to make about Sophia is the way in which self-serving developers, publicity-seeking authorities and a credulous media are able to take the roboethics debate and ram it down the rabbit hole.

It is axiomatic that our regulations stem from our laws and our laws stem from our ethics. If, therefore, we are to regulate robots and AIs as they are introduced into the maritime industry, surely our starting point should be roboethics?  

Where we are headed

I recently came across a paper describing a fascinating project being undertaken by Queen’s University Belfast in collaboration with a number of maritime industry entities. The MAXCMAS project takes its name from “machine executable collision regulations for marine autonomous systems.” The aim is to develop a system that will enable a navigationally autonomous ship to be compliant with the existing collision regulations. In this the MAXCMAS project is not alone. Similar attempts to find the “golden rivet” of autonomous navigation are underway in other countries around the world.

While MAXCMAS is indeed fascinating, I have to admit my immediate reaction was, why? Why would anyone want a machine to comply with the collision regulations? Let me say right away that I have no issue with the development of collision avoidance systems for a fully autonomous robot ship. You can’t have the latter without the former. However, from a roboethical point of view, I am not at all persuaded that compliance with the existing collision regulations is desirable. As I alluded to earlier, starting with the regulations is the wrong approach. Taking a roboethical approach, the natural result would be that autonomous unmanned ships shall keep clear of and give way to manned ships.

Pausing here, you will see I have dropped the various references to “robot ships,” “occasionally manned” ships and MASS and gone for something quite specific. I propose to use an unmanned, fully autonomous vessel as a baseline reference to explain my position. We can then move forward, incorporating other configurations and scenarios as we go.

The collision regulations

Along with SOLAS and MARPOL, the International Regulations for Preventing Collisions At Sea 1972 (aka “the Collision Rules,” “the Collision Regulations,” “Colregs,” “Rule of the Road”) is one of the most widely adopted maritime instruments in the world, applicable to more than 99 percent of the world’s shipping tonnage. The Colregs perform two functions. First, the regulations provide rules to govern ship encounters, to make those encounters predictable by assigning responsibilities and roles to vessels in particular circumstances, and to set the standard of conduct necessary to ensure safety.  

Second, the Colregs form a set of objective rules, breaches of which can be used in conjunction with the more subjective test of “good seamanship” to determine navigational fault and from there to establish liability in the event of a collision. As applied in practice, the Colregs work. Every day, many thousands of vessels encounter each other without colliding. In the overall context of shipping movements, although close or dangerous encounters occur reasonably regularly, minor collisions of the bumps and scrapes variety occur infrequently, while major collisions are thankfully rare. 

A substantial proportion of major collisions that do occur regrettably result in loss of life. Recent cases such as the Sanchi and collisions involving U.S. naval vessels have resulted in terrible loss of life. In such cases, at the instant of collision, the opportunity to prevent loss of life has already gone. Last month marked the 25th anniversary of the British Trent collision off Wandelaar pilot station. Nine of her crew died. I investigated and analyzed the circumstances of the collision on behalf of her owners and insurers. The toll in a case like that goes way beyond those actually lost – their family, friends, colleagues and that often-forgotten group, the survivors – all pay a price.

I have a law report of a collision case that occurred nearly 50 years ago. The liability action was heard at first instance in the English Admiralty Court, but the decision was appealed to the Court of Appeal. Lord Justice Templeman was one of the three appeal judges. His judgment began thus:

“The shades of Conrad must be smiling grimly. In the small hours of October 28, 1970, two vessels, the Ercole and the Embiricos, proceeding in opposite directions, with only the China Seas for maneuver, and aware of each other at a distance of 18 miles, succeeded in colliding at speed with serious consequences.”

The “serious consequences” referred to included substantial loss of life. Despite knowing, or perhaps because he knew, the circumstances of the collision, his Lordship was unable to prevent his obvious disdain from dripping off the page. When looking at the causes of collisions through the narrow focus of rules-based standards, it is sometimes difficult not to experience incredulity at certain instances of navigational fault. Legal liability is concerned with proximate causes, and in collision cases the causal factor with the greatest causative potency, and the one appearing most frequently as a proximate cause, is human on board operator error, usually aided and abetted by other causal factors such as inadequate training, inadequate professional standards, poor ergonomics, equipment limitations, lack of equipment integration, poor procedures and working conditions, lack of resources, poor teamwork, and poor communication and decision-making protocols.

I use the term human on board operator error (HOBOE anyone?) advisedly. Anyone who thinks that term is synonymous with the phrase “human error” needs to consider the larger fault tree and the causal contributions of human error in the pre-operational phase as practiced by superintendents, technical managers, designated persons and owners; the trainers, procedure writers, equipment designers and naval architects; the legislators and the flag States.

Human on board operator error that manifests itself as navigational fault can be classified in a number of different ways. We can refer to errors of commission and omission, conscious and automatic errors, knowledge-based, rules-based and skills-based errors, intended and unintended errors.  

Another method of classification, which is useful for present purposes, is to make the distinction between forced and unforced errors. While unforced errors can and do routinely occur in any situation, forced errors tend to occur in situations when decisions are made requiring positive action, often in response to a stressful or unexpected event. Taken to the extreme, forced errors give rise to the notion of the agony of collision, in which a mariner might be thrown onto the horns of a dilemma with no way of avoiding all risks and dangers. In this sense, forced errors can have irretrievable and terminal consequences.

Our fully autonomous unmanned vessel

Before going on to consider the content of the Colregs, I need to introduce our fully autonomous unmanned vessel. What capabilities should it have? Well, for a start it should be capable of operating to at least the same standard as a properly constituted human bridge team performing without error. In fact, given the level of technological innovation that is a requisite for a fully autonomous unmanned vessel, it should be capable of achieving a much higher standard, particularly in situational and collision risk assessment.

Clearly, such vessels should have a unified sensor system, incorporating optics, radar, LIDAR, AIS, thermal imaging, acoustics, environmental conditions sensors and vessel motion sensors. It should be capable of identifying itself as a fully autonomous vessel by lights and shapes, AIS, radar transponder and synthetic voice over VHF. It should be capable of communicating its immediate and longer-term navigational intentions in both general and targeted modes by electronic means and be able to provide this information on demand when interrogated by other vessels and shore stations. It should be able to process navigation intention data transmitted by other vessels and incorporate that information in its own decision-making.
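As a sketch of what communicating navigational intentions electronically might involve, here is a hypothetical intention message in Python. Every field name here is an assumption for illustration; no such message format has been standardized.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class IntentionMessage:
    """Hypothetical navigational-intention broadcast from a fully
    autonomous unmanned vessel. All field names are illustrative."""
    vessel_id: str          # e.g. an MMSI-style identifier
    fully_autonomous: bool  # self-identification as a fully autonomous vessel
    position: tuple         # (latitude, longitude) in degrees
    course_deg: float       # course over ground, degrees true
    speed_kn: float         # speed over ground, knots
    next_maneuver: str      # immediate intention, human-readable
    planned_waypoints: list = field(default_factory=list)  # longer-term intentions

    def on_interrogation(self) -> dict:
        """Serve the current intention on demand when interrogated by
        another vessel or a shore station."""
        return asdict(self)

msg = IntentionMessage("AUTONOMY-1", True, (51.37, 3.12), 255.0, 12.5,
                       "altering course to starboard at 14:32 UTC")
print(msg.on_interrogation())
```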

Exhaustive capability studies and risk and failure mode analyses will be needed to determine redundancy requirements for sensors, transmitters, processors and controllers. In accordance with roboethics, our fully autonomous unmanned vessel must have a transparent operating system with a complete data history to enable faults within the system, and their interactions with other parts of the system, to be traced and diagnosed.

Similar to the controller systems on board dynamically positioned manned vessels, the collision avoidance system of a fully autonomous unmanned vessel would update every few seconds. The rate of updating combined with the variety and combined sensitivity of its sensors should be capable of producing a very accurate situational model. Taken together with real-time predictions, background simulations and a comprehensive set of anti-collision objectives and parameters, our fully autonomous unmanned vessel should achieve a higher level of safe operation and efficiency compared to a manned vessel with currently fitted technology and a bridge team performing without error. Our industry should demand this higher level of safety. If it cannot be achieved, what is the point of robot ships?
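The core prediction inside each update cycle would be the closest point of approach (CPA) and time to CPA (TCPA) for every tracked target, the same calculation an ARPA radar performs today. A minimal sketch, assuming straight-line relative motion in a local flat-earth frame:

```python
import math

def cpa_tcpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Closest point of approach (nm) and time to CPA (hours), given
    positions in nautical miles (local flat-earth frame) and velocity
    vectors in knots. Assumes both vessels hold course and speed."""
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]  # relative position
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]  # relative velocity
    v2 = vx * vx + vy * vy
    if v2 == 0.0:                    # no relative motion: range stays constant
        return math.hypot(rx, ry), 0.0
    tcpa = max(-(rx * vx + ry * vy) / v2, 0.0)  # negative means CPA already passed
    cpa = math.hypot(rx + vx * tcpa, ry + vy * tcpa)
    return cpa, tcpa

# Own ship steaming north at 15 kn; target 5 nm to the east heading west at 15 kn.
cpa, tcpa = cpa_tcpa((0.0, 0.0), (0.0, 15.0), (5.0, 0.0), (-15.0, 0.0))
print(f"CPA {cpa:.2f} nm in {tcpa * 60:.0f} minutes")  # about 3.54 nm in 10 minutes
```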

Pausing again, it’s quite apparent at this point that a great deal of investment will be required for ocean-going robot ships. Who or what is driving this investment? Much of the coming technological innovation would be of tremendous benefit to current manned vessel systems, but it seems that the adoption of such support systems on manned vessels, when it happens, will be merely a by-product of advances in technology directed towards full autonomy.

The overall lack of engagement by shipowners in new technologies under development may have many causes, but the lack of focus on their applicability to current manned vessels suggests that the non-shipowner drivers behind the move towards robot ships are motivated not so much by safety of life at sea but more by the technological arms race now underway.

There is an argument that simply removing seafarers from the ships and replacing manned vessels with robot vessels contributes to safety of life at sea because there are then fewer lives potentially in danger. However, that argument is valid only in the event of a collision, at which point the die is cast. The primary objective of introducing robot ships must surely be to enhance safety of life at sea by making collisions less likely in the first place.       

Rights and privileges of machines

The Colregs are based in large part on sets of privileges. There is a hierarchy of privilege between vessels perceived to be restricted in their ability to keep out of the way of other vessels due to their condition or specialist employment (e.g. a vessel “not under command” or a vessel engaged in diving operations); there are other privileges given to vessels restricted by the available depth of water and vessels restricted by channel width or those following a traffic separation scheme; and there are privileges between vessels in sight of one another in crossing or overtaking situations involving risk of collision.
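As a simplified illustration of that hierarchy, the sketch below encodes the familiar “pecking order” of responsibilities between vessel categories. It is deliberately incomplete: the separate privileges of vessels constrained by their draught, those in narrow channels and those following traffic separation schemes are omitted, and the labels are paraphrased.

```python
# Simplified ordering of privilege between vessel categories (highest
# privilege first). Incomplete by design; see the caveats above.
PRIVILEGE_ORDER = [
    "not under command",
    "restricted in ability to maneuver",  # e.g. engaged in diving operations
    "engaged in fishing",
    "sailing vessel",
    "power-driven vessel",
]

def burden(a: str, b: str) -> str:
    """Return which of two categories bears the burden of keeping out of
    the way (the lower in the hierarchy gives way); equal categories fall
    back to the crossing and overtaking rules."""
    ia, ib = PRIVILEGE_ORDER.index(a), PRIVILEGE_ORDER.index(b)
    if ia == ib:
        return "apply the crossing/overtaking rules"
    return a if ia > ib else b

print(burden("power-driven vessel", "engaged in fishing"))  # power-driven gives way
```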

The central issue is this. Do we wish to give privileges and rights to machines over humans, in circumstances in which higher human on board operator error rates are a potential consequence? To clarify what I mean by higher on board operator error rates, consider these two scenarios:

When two manned vessels have an encounter, the risk of human on board operator error is premised on there being two sets of humans involved, albeit that in most cases one vessel will have a more active role than the other.

When a manned vessel encounters a fully autonomous unmanned vessel, only one set of humans is involved, but additional risk of human on board operator error arises simply due to the novelty of the situation and the uncertainty that could arise on board the manned vessel.

To reduce the level of risk below that present in either scenario, when a manned ship encounters a fully autonomous vessel, the applicable collision avoidance regime should require the primary burden for keeping clear and/or avoiding collision, at least initially, to rest with the latter. If instead the current collision regulations were applied, the risk of human on board operator error would be higher than it could and should be.

It bears repeating that compared to manned vessels fitted with current technology, a fully autonomous unmanned vessel offers a higher level of safe operation and efficiency, provided the new technologies deliver their promise.

Beyond safety of life at sea, there is the broader societal issue of bestowing privileges and rights on machines, no matter how intelligent the machine or seemingly inconsequential the privilege. Whereas any debate on the application of the collision regulations should take a lead from serving mariners, the use of autonomous machines at sea in general requires industry-wide debate now, before such machines are out in the wild, throwing up unintended and unanticipated consequences.

Other configurations and scenarios

Having set a baseline for a fully autonomous unmanned vessel, we now come to other configurations and scenarios. The determining factor in my view is not whether there are humans on board but whether a human on board or ashore has navigational control – the con. Taking the case of a vessel capable of being operated as a navigationally autonomous unmanned vessel, if a human on board or ashore has the con then the normal collision regulations should apply when encountering a manned vessel. 

But good luck with the shore-based con mode, particularly if there are humans on board engaged in other activities. Operators of such vessels will no doubt receive all the opprobrium they deserve should lack of accurate perception of risk and situational awareness become a causal factor in a collision. 

A “human in the loop” system, with a vessel navigationally controlled by a machine but with a human on board or ashore ready to intervene at the first sign of machine maloperation, is subtly different. It is also a little more difficult to categorize. On the face of it, any encounter with a manned vessel would start in machine mode and so, in my view, the machine should keep clear or give way, and no subsequent change of control mode should change that. The fact that a forced change of control mode to human has taken place, however, implies the vessel may be unable to comply with a requirement to give way, but then responsibility for taking action to avoid collision would also fall to the “stand on” manned vessel, as envisaged by the existing rules.

Human in the loop systems appear attractive for obvious reasons but, in my view, offer the worst of all worlds. Anyone who has dealt with a major casualty involving a significant automation failure will be aware that last-line operators face the unenviable task of making sense of what is happening, and why, within a largely opaque system. Until humans substantially improve and develop automation and refine the “human in the loop” role, the lessons of the ironies of automation will hold good.

I referred earlier to “occasionally manned” vessels, being vessels unmanned for the purpose of navigational control but which nevertheless have humans on board performing other functions. Despite the presence of humans, these vessels would be controlled by a machine and so should keep clear of and give way to manned vessels. The humans on board would have no role in a potential error chain, which in any event would be free of human on board operator error. They would be under the protection of an anti-collision system employing state-of-the-art technology and operating to a higher level of safety and efficiency than a manned vessel. Accordingly, allowing such a vessel privileges over a manned vessel is not justified.

Finally, when any flavor of machine-controlled vessel encounters any other flavor of machine-controlled vessel, both should apply the collision regulations as they currently stand, if only so that manned vessels in the vicinity may anticipate their intentions.
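Pulling these configurations together, the regime I am proposing reduces to a simple decision rule keyed on who holds the con, not on whether humans are on board. A minimal sketch, with labels of my own invention:

```python
def applicable_regime(con_a: str, con_b: str) -> str:
    """con is "human" (on board or ashore) or "machine"; an "occasionally
    manned" vessel counts as machine-conned for this purpose."""
    if con_a == con_b:
        # Human meets human, or machine meets machine: the existing
        # Colregs apply, the latter case so that manned vessels in the
        # vicinity may anticipate the machines' intentions.
        return "existing collision regulations apply to both vessels"
    machine = "A" if con_a == "machine" else "B"
    return f"vessel {machine} (machine-conned) keeps clear of and gives way to the other"

print(applicable_regime("human", "machine"))
print(applicable_regime("machine", "machine"))
```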

All of this may seem to be an overly complex treatment and I would agree, insofar as it reflects the complicated nature of the issues involved. In practical terms, the collision regulations would need to be amended to take account of autonomous and semi-autonomous vessels anyway. My modest proposals should not make that a more difficult process. Simplification of the solution to these issues would probably involve surrendering the ethical approach. That might be the way it goes, but let us at least have the debate first!  

Robot ships and collision liability

Traditionally, navigational fault is determined in collision cases by reference to the collision regulations specifically, and the practice of good seamanship in general. It is essentially an exercise in comparative human behavior. Individual faults committed by those on board each vessel are assessed in terms of blameworthiness and causative potency. These faults are framed within the tort of negligence to establish legal liability. Robot ships require a different approach.

A robot ship, a machine, has no “mind state” and no comprehension of consequences (being able to predict outcomes does not equate to foresight). It has no concept of responsibility and is incapable of having a mens rea. In the absence of responsibility there can be no blame. Its activities cannot be categorized as negligent. Civil or criminal liability cannot be attributed to a machine.

Nevertheless, the conduct of a robot ship can and should be assessed against the standard set by the collision regulations applicable to it at the time. Deviations from the standard would be equivalent to navigational fault. The blame that accompanies fault would not be attributable to the machine but would instead attach to some human entity with responsibility for it – the shipowners, the equipment manufacturers, and so on. Where precisely the blame sticks might depend on a number of factors, not least the element of the navigational control system – sensors, transmitters, processors, controllers, algorithm(s), operating parameters, etc. – within which the deviation arose.

It seems likely that the regimes for determining liability for a collision will have to change to accommodate robot ships. Would, for example, the test of deviant conduct of a machine form the basis of a comparative assessment of the navigational conduct of those on board a manned vessel? How would machine performance stack up in relation to human performance? Would they be assessed by different standards, not least because of the technological advantages and capability of the state-of-the-art equipment on board a robot ship?

And where would the practice of good seamanship come in? Good seamanship underpins not only the collision regulations but also all those instances not covered by the “rules.” How will the sea-going version of skill allied with common sense be spliced with a machine? Perhaps one day seamanship will become an obsolete concept. In the meantime, even a robot ship might end up in a situation requiring it to apply seamanship principles to determine the “least worst” action to take.

Captain Terry Ogg is a Marine Investigator and Consultant at OGG Expert.
 

The opinions expressed herein are the author's and not necessarily those of The Maritime Executive.