
Intelligent Energy Management Systems

  • The overall power demand profile of one voyage is described, from ferry departure through the transit from port A to port B to arrival, emphasizing how the propulsion system and onboard systems together form the total load that must be managed.
  • The offline-optimal case, in which the voyage’s future load profile is known in advance, is explained: fuel-cell power is held roughly constant during transit, then reduced and shut down before arrival, while shore power charges the battery at port.
  • Reinforcement learning agents that split power between fuel cell and battery without knowledge of the next time step are introduced. Discrete Q-learning follows the offline-optimal trend but causes frequent, unnecessary fuel-cell power adjustments.
  • Deep reinforcement learning with a deep network is shown to yield more confident and stable power decisions; the resulting fuel-cell and battery power profiles stay very close to the offline optimum despite the uncertainty.
  • Multiple agents are evaluated against the offline optimum on emissions, emission ratios, and cost; most agents come within a few percent of the optimum. Finally, a generic energy management framework applied to an actual ship is introduced.
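The offline-optimal dispatch described in the second bullet can be caricatured as a simple rule over a known load profile: hold the fuel cell near the mean transit load, taper it to zero before arrival, and let the battery absorb the residual. The sketch below is only an illustration of that shape, not the actual optimizer; `offline_split`, the phase indices, and all power values are hypothetical.

```python
def offline_split(load_kw, transit, fc_ramp_down=2):
    """Split a fully known load profile between fuel cell and battery.

    load_kw      : list of load samples (kW) over the voyage
    transit      : (start, end) sample indices of the transit phase
    fc_ramp_down : samples before arrival over which the fuel cell tapers off
    Returns (fc_kw, batt_kw); a negative batt_kw means the battery is charging.
    """
    start, end = transit
    transit_load = load_kw[start:end]
    fc_level = sum(transit_load) / len(transit_load)   # hold FC near mean transit load
    fc = []
    for t, p in enumerate(load_kw):
        if t < start:                       # at departure port: FC off
            fc.append(0.0)
        elif t < end - fc_ramp_down:        # transit: constant FC power
            fc.append(fc_level)
        elif t < end:                       # taper FC toward zero before arrival
            frac = (end - t) / (fc_ramp_down + 1)
            fc.append(fc_level * frac)
        else:                               # at arrival port: FC shut down
            fc.append(0.0)
    batt = [p - f for p, f in zip(load_kw, fc)]        # battery covers the residual
    return fc, batt
```

With this rule, the battery charges (negative power) whenever the constant fuel-cell output exceeds the instantaneous load, mirroring the constant-transit / shutdown-before-arrival pattern of the offline optimum.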
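The discrete Q-learning agent in the third bullet rests on the standard tabular update, which uses only the current transition and never the next time step's load. A minimal sketch, assuming hypothetical discretized states (e.g. battery state-of-charge buckets) and actions (fuel-cell power adjustments):

```python
import random

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Tabular Q-learning: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    target = r + gamma * max(Q[s_next].values())
    Q[s][a] += alpha * (target - Q[s][a])

def epsilon_greedy(Q, s, epsilon=0.1):
    """Explore with probability epsilon, otherwise take the greedy action."""
    if random.random() < epsilon:
        return random.choice(list(Q[s]))
    return max(Q[s], key=Q[s].get)

# toy table: states are SOC buckets, actions adjust fuel-cell power (illustrative)
Q = {"soc_low": {"hold": 0.0, "raise_fc": 0.0},
     "soc_ok":  {"hold": 1.0, "raise_fc": 2.0}}
q_update(Q, "soc_low", "raise_fc", r=0.5, s_next="soc_ok")
```

Because each state-action pair is updated independently, neighboring states can disagree about the best fuel-cell level, which is one way to read the frequent, unnecessary power adjustments the summary mentions.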
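The deep variant in the fourth bullet replaces the Q-table with a network that maps continuous state features to one Q-value per discrete fuel-cell power level, so nearby states naturally get similar (more stable) decisions. A bare-bones forward pass, with illustrative dimensions and random weights standing in for a trained network; the feature names and the five-level action grid are assumptions, not the video's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_q(state, W1, b1, W2, b2):
    """Tiny Q-network: state features -> one Q-value per discrete FC power level."""
    h = np.maximum(0.0, state @ W1 + b1)   # ReLU hidden layer
    return h @ W2 + b2

# illustrative shapes: 3 state features (SOC, load, voyage phase), 5 FC power levels
W1 = rng.normal(size=(3, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 5)); b2 = np.zeros(5)

state = np.array([0.7, 0.4, 1.0])          # hypothetical normalized state
q_values = mlp_q(state, W1, b1, W2, b2)
action = int(np.argmax(q_values))          # greedy choice of FC power level
```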
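The "few percent away from the optimum" comparison in the last bullet amounts to a percent gap between each agent's cost (or emissions) and the offline-optimal value. A one-line sketch; the agent names and numbers below are made up for illustration:

```python
def gap_to_optimum(agent_cost, optimal_cost):
    """Percent by which an agent's cost (or emissions) exceeds the offline optimum."""
    return 100.0 * (agent_cost - optimal_cost) / optimal_cost

# hypothetical agent results against an offline-optimal cost of 100 (arbitrary units)
results = {"Q-learning": 104.0, "DQN": 101.5}
gaps = {name: gap_to_optimum(cost, 100.0) for name, cost in results.items()}
```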
Updated on Jan 18, 2026