
Interview: Orca AI's Dor Raviv on AI-Assisted Navigation

AI-assisted collision avoidance systems increase situational awareness for mariners, according to Orca AI

Published Mar 17, 2025 2:59 PM by The Maritime Executive

 

Fully autonomous navigation is still in the testing stage for merchant ships, but AI mariner-assistance systems are already in commercial use - and their developers say the technology is bringing demonstrable safety improvements. In this interview, Dor Raviv, Co-Founder and CTO of Orca AI, tells TME how autonomous technology is improving safety and efficiency for commercial vessels worldwide.

Let's start with a general overview of what's new in assisted navigation and autonomy in the maritime space. What are you seeing in the marketplace?

Maritime shipping and commerce have gone through turmoil for the last few years, but it's been a very good era for AI and digitalization. At Orca AI, we've grown dramatically. We're now installed on more than 750 commercial ships with an order book exceeding 1,500 vessels. We've proven that AI perception for collision avoidance works and that it's something the market wants and needs. We've also made significant advancements in Japan with our autonomous ship collaboration. We've concluded the proof-of-concept phase and are now working on a full commercial suite for fully autonomous ships.

This growth is driven by multiple factors: increased safety requirements, new EU and IMO regulations on emissions and fuel reduction, and the challenge of recruiting crews to work aboard large ships.

More broadly, seafarers themselves have undergone massive transitions. Technologies like LLMs, ChatGPT, cloud computing, and generative AI applications are being utilized in so many day-to-day use cases. This has generated interest, trust, and a better understanding of AI's capabilities and limitations.

Two years ago, if you asked a room full of captains who had used AI in the past week, no one would know what you were talking about. Two weeks ago, I asked the same question to a room of captains in Athens, and everyone had used AI in the past week for something. This broader adoption has set the scene for what to expect from an AI system onboard a ship.

We also see an increased focus on maritime defense — shoreline monitoring, port monitoring, passive perception systems, mission autonomy, and USVs. Just last month, the carrier USS Harry S. Truman collided with a commercial ship, which is exactly the type of situation Orca AI helps prevent.

We've been working with several navies on using cameras, which are much cheaper than radars, to provide computer vision-based perception adequate for autonomous navigation. The race for data to power AI for autonomous capabilities is in full swing.

Can you tell us about the challenges in crewing and how automation on the bridge can help?

In an autonomous voyage scenario, crew will always be onboard, but perhaps in reduced numbers. If we can prove the system understands its limitations, we can approach our clients and say, "Consider reducing watchkeepers under these conditions." That delivers immediate value — maybe reducing manning by one or two people per ship, which is massive savings.

A ship might start with a decision support system for collision avoidance while docked. As it leaves port and enters coastal navigation in high traffic, it might use a Level 2 autonomy system for collision avoidance recommendations. When entering the high seas, it could switch to Level 4 autonomy focused on fuel and emissions optimization while maintaining navigational safety — this covers about 90-95% of the voyage. Upon encountering severe weather, low visibility, high traffic, or search and rescue situations, the system would automatically notify the crew to return to the bridge and handle these delicate situations.
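To make that phased handover concrete, here is a minimal sketch in Python of how voyage conditions might map to an autonomy level and a crew-recall decision. The phase names, thresholds, and level numbers are illustrative assumptions, not Orca AI's actual software.

```python
# Hypothetical sketch (not Orca AI's implementation): mapping voyage
# conditions to an autonomy level and deciding when to recall the crew.
from dataclasses import dataclass
from enum import Enum


class AutonomyLevel(Enum):
    DECISION_SUPPORT = 1   # alerts and recommendations only
    LEVEL_2 = 2            # collision-avoidance recommendations, crew in the loop
    LEVEL_4 = 4            # supervised autonomy on the high seas


@dataclass
class VoyageConditions:
    phase: str              # "docked", "coastal", or "open_sea" (assumed labels)
    visibility_nm: float    # estimated visibility in nautical miles
    traffic_density: str    # "low", "medium", "high"
    severe_weather: bool
    sar_situation: bool     # search-and-rescue situation in progress


def select_mode(c: VoyageConditions) -> tuple[AutonomyLevel, bool]:
    """Return (autonomy level, crew_recall_required) for the current conditions."""
    # Delicate situations always hand control back to the bridge team.
    if c.severe_weather or c.sar_situation or c.visibility_nm < 1.0:
        return AutonomyLevel.DECISION_SUPPORT, True
    if c.phase == "docked":
        return AutonomyLevel.DECISION_SUPPORT, False
    if c.phase == "coastal" or c.traffic_density == "high":
        return AutonomyLevel.LEVEL_2, False
    # Open sea in benign conditions: optimize fuel/emissions under supervision.
    return AutonomyLevel.LEVEL_4, False
```

The point of the sketch is the shape of the decision, not the thresholds: the system only stays at higher autonomy while it can verify it is inside its known operating envelope, and every edge case resolves to recalling the crew.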

An officer monitors an ARPA (right) and an Orca AI display (center) during a transit (Orca AI)

At Orca, we introduce these capabilities incrementally. Using our existing hardware, we're working with ABS toward Level 2 autonomy for 2025. This means understanding the limitations of sensor fusion, computer vision, collision avoidance, and translating those into a system that knows its boundaries and when to call the crew. After we gain trust with this bottom-up approach across our fleet of 1,500 ships, the transition from Level 2 to Level 3 autonomy becomes easier because we already understand the system's limitations. The phased approach, closely connected to crew requirements, makes much more sense for adoption.

In terms of timeline, how far out do you think we are before we start seeing ships that self-navigate in specific conditions?

I believe maritime will transition faster than automotive or aviation. There are so many incentives, and while the industry is perceived as slow-moving because ships move slowly, the regulation here actually moves quite fast.

With the IMO MASS code for voluntary implementation coming to fruition by 2027, and EU ETS regulations already in force, I think that in about two years we'll see ships navigating autonomously in the open sea under supervision and restrictions.

The infrastructure to support this is already here. Advanced perception systems like Orca AI's are already in production at scale. This scale allows ships to share rich data between themselves — not just AIS data, but context-based information like specific events, congestion, port traffic, etc.
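As a rough illustration of what "context-based information" beyond AIS could look like, here is a hypothetical sketch of a shared event message. The field names, identifiers, and serialization are assumptions for illustration, not Orca AI's data format.

```python
# Hypothetical sketch of a context event shared between ships alongside AIS.
# Field names and values are illustrative assumptions, not Orca AI's format.
from dataclasses import dataclass, asdict
import json
import time


@dataclass
class ContextEvent:
    event_type: str        # e.g. "close_encounter", "congestion", "port_traffic"
    latitude: float
    longitude: float
    timestamp: float       # seconds since epoch, UTC
    source_ship_id: str    # anonymized identifier
    details: dict          # free-form context, e.g. {"vessel_count": 14}


def encode_event(event: ContextEvent) -> str:
    """Serialize an event for transmission over the ship's data link."""
    return json.dumps(asdict(event))


example = ContextEvent(
    event_type="congestion",
    latitude=37.94,
    longitude=23.64,
    timestamp=time.time(),
    source_ship_id="anon-0042",
    details={"vessel_count": 14, "area_radius_nm": 3},
)
print(encode_event(example))
```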

We already have the connectivity and bandwidth to calculate optimizations in the cloud and share them back to ships in real-time. Remote teams already have access to live data streams including video from all cameras, sensor data, and location information.

All the class societies I've spoken with are moving quickly to approve these capabilities. They've brought in experts from universities to help push autonomy and AI to the mass market. In the past two years, I've conducted about six computer vision workshops with different class societies, discussing benefits, limitations, data gaps, benchmarking, evaluation scenarios, and the human aspects of AI integration.

When the system is able to stand watch, do you anticipate there will still need to be at least one person on the bridge?

It's a good question. Realistically, such systems need to gain the trust of the crew — that's more important than gaining management's trust. It will vary depending on different crews and situations.

From a regulatory standpoint, we'll always need someone on the bridge, at least in the foreseeable future — maybe just one person who isn't looking out the window but is monitoring other systems. But perhaps after 5-10 years, these systems will be so good that, yes, why not? Solo circumnavigators go to sleep from time to time; they're not awake 24/7.

It will take time to gain the confidence and trust of crews, but fortunately, they have plenty of time to evaluate these systems because they spend so much time on the bridge. I think this transition will happen faster than in other industries.

Insurers have a clear economic incentive to improve safety. Can you tell us about your work with insurance partners like NorthStandard?

This is a very good question. Historically, insurance companies look at track records to set premiums. Now they have unprecedented amounts of context and data flowing from platforms like ours. They need our help understanding and assessing what constitutes risk.

This is brilliant because you have all the evidence to objectively prove whether these systems increase or decrease safety events, which could eventually lead to premium reductions. That's the holy grail.

We've been able to prove this to NorthStandard, and they now subsidize half the cost of our system to their clients in the first year. We have so much anonymized data backed up by chart data, video data, context events — all easy to explore, learn from, and use to understand improvements.

The goal for insurers is to become the most attractive insurance company by offering lower premiums to clients who adopt AI, automation, and data collection. It's a win-win-win scenario, and we're gaining interest from more insurers because we can empirically prove that ships using Orca are safer. We can track KPIs over time and show the data.

Are we going to get to a point where AI can identify sailboats and fishing boats for COLREGS purposes? Or make passing arrangements on VHF?

Today, all autonomous navigation systems use a camera as the primary sensor and everything else is fused to it. If I identify something as a fishing boat rather than a sailing boat, there's currently no logic in other systems to handle that distinction. You need a system with richer context to understand that if an object looks a certain way and is stopped, it's probably a fishing boat.
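As a concrete illustration of why that classification matters downstream, here is a minimal sketch (not Orca AI's logic) of mapping a fused track to the COLREGS category a planner would use. The detector label and the speed threshold are assumptions for illustration; the AIS ship-type codes (30 for fishing, 36 for sailing) are standard.

```python
# Hypothetical sketch: deriving a COLREGS category from fused camera/AIS cues
# so downstream collision-avoidance logic can treat the target correctly.
# Detector labels and thresholds are illustrative, not Orca AI's production rules.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Track:
    visual_class: str                     # raw detector label, e.g. "small_craft"
    speed_over_ground_kn: float
    ais_ship_type: Optional[int] = None   # AIS type 30 = fishing, 36 = sailing


def colregs_category(t: Track) -> str:
    """Map a fused track to the COLREGS category the planner should reason with."""
    if t.ais_ship_type == 30:
        return "vessel_engaged_in_fishing"   # Rule 18: power-driven vessels give way
    if t.ais_ship_type == 36:
        return "sailing_vessel"
    # No AIS: fall back to visual class plus behavior (context) - a small craft
    # loitering at very low speed is more likely fishing than sailing.
    if t.visual_class == "small_craft" and t.speed_over_ground_kn < 2.0:
        return "vessel_engaged_in_fishing"
    return "power_driven_vessel"
```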

It's natural for humans to identify lights and patterns, but hard for AI systems to derive meaning from those cues alone. Practically speaking, when a system replaces people, it needs to operate properly without some legacy human requirements. For example, in foggy conditions where you traditionally listen for sound signals, I wouldn't trust a microphone to locate other vessels. I'd simply ask a crew member to come to the bridge and listen.

Similarly, it's extremely difficult to parse radio communications from speakers of different languages and translate that into machine code. Even with the latest ChatGPT technology, I wouldn't trust it as the sole solution for managing ship encounters. I'd want someone on the bridge in those situations.

The same applies to delicate operations like port docking. Instead of expensive technologies like lasers, LiDAR, and QR codes to position a ship precisely, maybe just use two tugboats. Don't over-solve problems or find solutions to problems that might not exist.

Dor Raviv is the Co-Founder and CTO of Orca AI, a company that provides AI-powered navigation systems for maritime vessels. Orca AI's collision avoidance system is currently installed on more than 750 commercial ships worldwide.