AI policy

This document describes the AI policy of the Flanders Marine Institute (VLIZ). It applies to all AI systems used, developed, or integrated within the organization. This policy aligns with the EU AI Act and governs the deployment, risk management, and compliance of AI in our activities.

Latest update: 2026-03-10

EU AI Act Compliance

The EU AI Act establishes a regulatory framework for the safe and ethical use of AI in the European Union. It classifies AI systems by risk level and sets compliance requirements accordingly. More information can be found on the European Commission AI regulation site or through the Belgian Data Protection Authority (DPA).

VLIZ has an Ethics Committee that reviews all new and ongoing projects, ensuring they adhere to ethical standards and regulatory requirements, including AI-related considerations.

Current AI Use in VLIZ

VLIZ currently employs AI in various domains, including:

  • Image Generation – AI-created images for web publication, clearly labeled as such.

  • Image Recognition – AI-assisted analysis of (non-human) marine images, e.g., plankton images.

  • Remotely Operated Vehicles (ROVs) – Human operators control the vehicles from a remote location, supported by AI for navigation and situational awareness.

  • Modelling & Simulations – AI-enhanced environmental modeling and data analysis.

  • Chatbots & AI-Assisted Coding – Use of AI tools for communication and software development.

  • Mammal Sound Recognition – AI models for detecting and classifying the ultrasonic sounds of marine mammals.

  • Marine Sound Reference Database – A reference database for sea sounds is currently under development. This database could enable future AI applications for enhanced underwater sound recognition and analysis.

For AI-related inquiries, contact VLIZ at ai@vliz.be.

AI Risk Classification & Compliance

AI systems used by VLIZ are classified according to the EU AI Act risk categories:

Minimal-Risk AI (in use)

Includes AI systems that have little to no impact on users' rights or safety. These systems are not subject to specific regulatory requirements but follow responsible AI practices. Examples:

  • AI-powered image recognition for analysis (e.g., plankton recognition).

  • AI coding assistants and productivity-enhancing tools.

  • AI-generated images for publications, with appropriate labeling.

Limited-Risk AI (in use)

Applies to AI systems where transparency and disclosure are required. Users must be aware that they are interacting with AI. Examples:

  • Chatbots and AI-powered communication tools that assist with inquiries.

  • AI-generated content (e.g., images, reports) that is clearly labeled as AI-created.

  • Since AI is used only to support human operators remotely controlling ROVs, it is deemed limited-risk.

High-Risk AI (not in use)

Covers AI systems where strict compliance, risk management, and human oversight are necessary due to potential impact on individuals or operations.

VLIZ does not employ high-risk AI systems: AI is not used for autonomous vehicle steering, and scientific results do not have direct high-impact consequences for safety, rights, or critical decision-making.

Prohibited AI (not in use)

AI applications classified as unacceptable risk under the EU AI Act, such as real-time biometric identification, social scoring, or manipulative AI techniques, are not used within VLIZ.

Transparency & Human Oversight

  • Users interacting with AI-driven tools are informed of AI involvement.

  • AI-generated content and automated decisions with significant impact must be explainable.

  • Human oversight mechanisms are implemented for high-risk AI applications.

  • AI models are regularly reviewed to ensure fairness, accuracy, and reliability.

Data Protection & Ethics in AI

  • AI must comply with GDPR when processing any personal data.

  • Bias detection and fairness testing are conducted for applicable AI models.

  • AI operations prioritize data security, accuracy, and ethical considerations.

  • AI systems must be auditable, ensuring transparency in their decision-making processes.

Monitoring, Reporting & Incident Handling

  • AI systems undergo periodic audits and compliance assessments.

  • AI-related risks or concerns can be reported to ai@vliz.be.

  • In the event of an AI-related incident or non-compliance, appropriate actions will be taken in line with regulatory requirements.

  • If an AI-related failure results in significant harm, authorities are informed within the required timeframe.

AI Governance & Policy Review

This policy is reviewed periodically and updated based on:

  • Regulatory changes under the EU AI Act.

  • New AI advancements and risk evaluations.

  • Findings from AI audits and incident reports.

VLIZ reserves the right to modify its AI Policy at any time without notice. Any changes to the AI Policy will be posted on this page and will become effective on the date of posting.

For any questions or concerns, contact ai@vliz.be.