
Consultation de la liste des PNOs ("Viewing the PNO list") #2982

Merged · 82 commits · Mar 27, 2024
Conversation

@VincentAntoine (Collaborator) commented Feb 27, 2024

Linked issues

Summary by CodeRabbit

  • New Features

    • Introduced a comprehensive system for monitoring fishing activities through prior notifications, including vessel information, catches, and risk assessments.
    • Added functionality to retrieve and filter prior notifications and logbook reports based on various criteria such as dates, locations, and vessel details.
    • Implemented a new frontend component for displaying a list of prior notifications with capabilities for filtering, sorting, and accessing detailed views.
  • Enhancements

    • Enhanced the retrieval of prior notifications with additional details like port names and seafront information.
    • Refined the vessel entity representation in the frontend to improve data handling and presentation.
  • Bug Fixes

    • Fixed issues related to the incorrect handling of null values in vessel identity properties.
  • Tests

    • Added unit and integration tests to ensure the reliability of new functionalities related to prior notifications and logbook reports.

coderabbitai bot (Contributor) commented Mar 1, 2024

Important

Auto Review Skipped

More than 25% of the files were skipped due to the max files limit. Skipping the review to prevent a low-quality one.

51 of the 160 files are above the max files limit of 100.

Walkthrough

The recent updates bring significant enhancements to the MonitorFish project, focusing on managing and displaying prior notifications. They enable consultation of the prior notification list, identification of optimal ports and vessels for control, and an architecture for the different types of prior notifications. The changes span backend and frontend components to make the monitoring of fishing activities more effective.

Changes

File Pattern | Changes
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/entities/*: Introduced the PriorNotification data class and related entities.
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/filters/ReportingFilter.kt: Added ReportingFilter for filtering reporting data.
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/repositories/*: Added methods for retrieving and manipulating data.
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/use_cases/*: Implemented use cases for handling prior notifications.
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/*: Updated caching configurations and database interactions.
  • frontend/src/*: Refactored and added components for frontend display.

Assessment against linked issues

Objective | Addressed | Explanation
  • Consult the list of prior notifications to identify optimal ports and vessels for control (#2843)
  • Implement the architecture for different types of prior notifications and develop the pipeline (#2844)

🐇 In the code, a rabbit did play,
Bringing changes to light of day.
🌊🐟 Prior notifications now in view,
Ports and vessels, clear and true.
With types defined, the project does thrive,
MonitorFish sails, with code alive.

Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>.
    • Generate unit-tests for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit tests for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai generate interesting stats about this repository and render them as a table.
    • @coderabbitai show all the console.log statements in this repository.
    • @coderabbitai read src/utils.ts and generate unit tests.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (invoked as PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger a review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai help to get help.

Additionally, you can add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • The JSON schema for the configuration file is available here.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/coderabbit-overrides.v2.json

CodeRabbit Discord Community

Join our Discord Community to get help, request features, and share feedback.

@@ -30,4 +34,6 @@ data class LogbookMessage(
val transmissionFormat: LogbookTransmissionFormat,
val software: String? = null,
var isSentByFailoverSoftware: Boolean = false,
val tripGears: List<LogbookTripGear>? = listOf(),
Collaborator

This property isn't used by the frontend, is it?

@@ -0,0 +1,8 @@
package fr.gouv.cnsp.monitorfish.domain.entities.logbook

data class LogbookTripGear(
Collaborator


Why not use the entities/logbook/Gear() class? It looks like we have a duplicate here, don't we?

Member


Because none of a trip gear's props are nullable, and I don't think we have a gearName, but I'll ask Vincent.

frontend/src/features/SideWindow/index.tsx (resolved)
frontend/src/hooks/useGetFleetSegmentsAsOptions.ts (outdated, resolved)
frontend/src/utils/getUrlOrPathWithQueryParams.ts (outdated, resolved)
frontend/src/utils/getUrlOrPathWithQueryParams.ts (outdated, resolved)
@ivangabriele force-pushed the vincent/prior_notification_list branch from f59bb04 to 1c67b42 on March 21, 2024 at 01:28
@louptheron marked this pull request as ready for review on March 21, 2024 at 10:30
Contributor

@coderabbitai (bot) left a comment


Review Status

Actionable comments generated: 21

Configuration used: CodeRabbit UI

Files that changed from the base of the PR, between 67643fc and 1c67b42.
Files ignored due to path filters (7)
  • adrs/images/pno_types_architecture.png is excluded by: !**/*.png
  • datascience/src/pipeline/data/pno_type_rules.csv is excluded by: !**/*.csv
  • datascience/src/pipeline/data/pno_types.csv is excluded by: !**/*.csv
  • frontend/package-lock.json is excluded by: !**/*.json
  • frontend/package.json is excluded by: !**/*.json
  • infra/docker/docker-compose.cypress.yml is excluded by: !**/*.yml
  • infra/docker/docker-compose.puppeteer.yml is excluded by: !**/*.yml
Files selected for processing (107)
  • Makefile (1 hunks)
  • adrs/0004-prior-notifications-architecture-and-typing.md (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/entities/logbook/Catch.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/entities/logbook/LogbookMessage.kt (2 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/entities/logbook/LogbookOperationType.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/entities/logbook/LogbookTripGear.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/entities/logbook/LogbookTripSegment.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/entities/logbook/messages/PNO.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/entities/port/Port.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/entities/prior_notification/PriorNotification.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/entities/prior_notification/PriorNotificationType.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/entities/vessel/Vessel.kt (3 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/filters/LogbookReportFilter.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/filters/ReportingFilter.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/repositories/LogbookReportRepository.kt (3 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/repositories/ReportingRepository.kt (2 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/repositories/VesselRepository.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/use_cases/prior_notification/GetPriorNotificationTypes.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/use_cases/prior_notification/GetPriorNotifications.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/use_cases/reporting/GetAllCurrentReportings.kt (2 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/use_cases/reporting/GetInfractionSuspicionWithDMLAndSeaFront.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/use_cases/vessel/GetVessel.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/api/bff/PriorNotificationController.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/api/outputs/LogbookMessageCatchDataOutput.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/api/outputs/LogbookMessageTripGearDataOutput.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/api/outputs/LogbookMessageTripSegmentDataOutput.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/api/outputs/PortDataOutput.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/api/outputs/PriorNotificationDataOutput.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/api/outputs/PriorNotificationTypeDataOutput.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/entities/LogbookReportEntity.kt (5 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/entities/PortEntity.kt (2 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/entities/RiskFactorsEntity.kt (2 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/entities/VesselEntity.kt (3 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/repositories/JpaLogbookReportRepository.kt (7 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/repositories/JpaReportingRepository.kt (5 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/repositories/JpaRiskFactorsRepository.kt (1 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/repositories/JpaVesselRepository.kt (2 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/repositories/interfaces/DBLogbookReportRepository.kt (3 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/repositories/interfaces/DBReportingRepository.kt (2 hunks)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/specifications/LogbookReportSpecification.kt (1 hunks)
  • backend/src/main/resources/db/migration/internal/V0.244__Create_pno_types.sql (1 hunks)
  • backend/src/main/resources/db/migration/internal/V0.245__Update_logbook_reports_table.sql (1 hunks)
  • backend/src/main/resources/db/migration/internal/V0.246__Create_jsonb_contains_any_function.sql (1 hunks)
  • backend/src/main/resources/db/migration/internal/V0.247__Create_unaccent_extension.sql (1 hunks)
  • backend/src/main/resources/db/migration/internal/V0.248__Create_jsonb_to_timestamp.sql (1 hunks)
  • backend/src/main/resources/db/testdata/V0.221.0__Insert_port_codes.sql (1 hunks)
  • backend/src/main/resources/db/testdata/V666.11__Insert_risk_factors.sql (1 hunks)
  • backend/src/main/resources/db/testdata/V666.2.1__Insert_more_dummy_vessels.sql (1 hunks)
  • backend/src/main/resources/db/testdata/V666.27__Insert_dummy_pno_types.sql (1 hunks)
  • backend/src/main/resources/db/testdata/V666.5.0__Insert_logbook_raw_messages_and reports.sql (4 hunks)
  • backend/src/main/resources/db/testdata/V666.5.1__Insert_more_pno_logbook_reports.sql (1 hunks)
  • backend/src/main/resources/db/testdata/json/V666.11__Insert_risk_factors.jsonc (1 hunks)
  • backend/src/main/resources/db/testdata/json/V666.2.1__Insert_more_dummy_vessels.jsonc (1 hunks)
  • backend/src/main/resources/db/testdata/json/V666.5.1__Insert_more_pno_logbook_reports.jsonc (1 hunks)
  • backend/src/test/kotlin/fr/gouv/cnsp/monitorfish/domain/use_cases/GetAllCurrentReportingsUTests.kt (5 hunks)
  • backend/src/test/kotlin/fr/gouv/cnsp/monitorfish/domain/use_cases/GetInfractionSuspicionWithDMLAndSeaFrontUTests.kt (5 hunks)
  • backend/src/test/kotlin/fr/gouv/cnsp/monitorfish/domain/use_cases/GetLogbookMessagesUTests.kt (8 hunks)
  • backend/src/test/kotlin/fr/gouv/cnsp/monitorfish/domain/use_cases/GetVesselUTests.kt (4 hunks)
  • backend/src/test/kotlin/fr/gouv/cnsp/monitorfish/domain/use_cases/TestUtils.kt (8 hunks)
  • backend/src/test/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/repositories/JpaFacadeAreasRepositoryITests.kt (1 hunks)
  • backend/src/test/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/repositories/JpaLogbookRawMessageRepositoryITests.kt (3 hunks)
  • backend/src/test/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/repositories/JpaLogbookReportRepositoryITests.kt (19 hunks)
  • backend/src/test/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/repositories/JpaPortRepositoryITests.kt (1 hunks)
  • backend/src/test/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/repositories/JpaReportingRepositoryITests.kt (10 hunks)
  • backend/src/test/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/repositories/JpaVesselRepositoryITests.kt (1 hunks)
  • datascience/docs/make.bat (1 hunks)
  • datascience/src/pipeline/flows/enrich_logbook.py (1 hunks)
  • datascience/src/pipeline/flows/init_pno_types.py (1 hunks)
  • datascience/src/pipeline/flows_config.py (2 hunks)
  • datascience/src/pipeline/parsers/ers/childless_parsers.py (1 hunks)
  • datascience/src/pipeline/parsers/ers/log_parsers.py (2 hunks)
  • datascience/src/pipeline/parsers/flux/log_parsers.py (7 hunks)
  • datascience/src/pipeline/queries/monitorfish/fleet_segments.sql (1 hunks)
  • datascience/src/pipeline/queries/monitorfish/pno_species_and_gears.sql (1 hunks)
  • datascience/src/pipeline/queries/monitorfish/pno_types.sql (1 hunks)
  • datascience/src/pipeline/queries/monitorfish/trip_dates_of_pnos.sql (1 hunks)
  • datascience/tests/conftest.py (2 hunks)
  • datascience/tests/test_data/remote_database/V666.32__Insert_test_pno_types.sql (1 hunks)
  • datascience/tests/test_data/remote_database/V666.5__Reset_test_logbook.sql (2 hunks)
  • datascience/tests/test_pipeline/test_flows/test_enrich_logbook.py (1 hunks)
  • datascience/tests/test_pipeline/test_flows/test_init_pno_types.py (1 hunks)
  • datascience/tests/test_pipeline/test_parsers/test_ers.py (1 hunks)
  • datascience/tests/test_pipeline/test_parsers/test_flux.py (2 hunks)
  • datascience/tests/test_pipeline/test_shared_tasks/test_segments.py (3 hunks)
  • frontend/.env.example (1 hunks)
  • frontend/.env.local.defaults (1 hunks)
  • frontend/.eslintrc.js (2 hunks)
  • frontend/cypress/e2e/side_window/prior_notification_list/filter_bar.spec.ts (1 hunks)
  • frontend/cypress/e2e/side_window/prior_notification_list/utils.ts (1 hunks)
  • frontend/cypress/e2e/utils/assertAll.ts (1 hunks)
  • frontend/cypress/support/commands/loadPath.ts (1 hunks)
  • frontend/scripts/generate_test_data_seeds.mjs (1 hunks)
  • frontend/src/api/api.ts (1 hunks)
  • frontend/src/components/Ellipsised.tsx (1 hunks)
  • frontend/src/constants/constants.ts (1 hunks)
  • frontend/src/constants/index.ts (1 hunks)
  • frontend/src/domain/entities/seaFront/constants.ts (1 hunks)
  • frontend/src/domain/entities/sideWindow/constants.ts (1 hunks)
  • frontend/src/domain/types/Gear.ts (1 hunks)
  • frontend/src/domain/types/port.ts (1 hunks)
  • frontend/src/domain/types/specy.ts (1 hunks)
  • frontend/src/env.d.ts (1 hunks)
  • frontend/src/features/FleetSegment/types.ts (1 hunks)
  • frontend/src/features/Logbook/Logbook.types.ts (3 hunks)
  • frontend/src/features/Logbook/LogbookMessage.types.ts (1 hunks)
  • frontend/src/features/Logbook/constants.ts (1 hunks)
  • frontend/src/features/Mission/components/MissionForm/ActionForm/shared/GearsField.tsx (2 hunks)
Files not processed due to max files limit (30)
  • frontend/src/features/PriorNotification/PriorNotification.types.ts
  • frontend/src/features/PriorNotification/api.ts
  • frontend/src/features/PriorNotification/components/PriorNotificationList/ButtonsGroupRow.tsx
  • frontend/src/features/PriorNotification/components/PriorNotificationList/FilterBar.tsx
  • frontend/src/features/PriorNotification/components/PriorNotificationList/constants.tsx
  • frontend/src/features/PriorNotification/components/PriorNotificationList/index.tsx
  • frontend/src/features/PriorNotification/components/PriorNotificationList/types.ts
  • frontend/src/features/PriorNotification/components/PriorNotificationList/utils.ts
  • frontend/src/features/PriorNotification/hooks/useGetPriorNotificationTypesAsOptions.ts
  • frontend/src/features/PriorNotification/slice.ts
  • frontend/src/features/SideWindow/Menu/index.tsx
  • frontend/src/features/SideWindow/SubMenu/index.tsx
  • frontend/src/features/SideWindow/components/Body.tsx
  • frontend/src/features/SideWindow/components/Header.tsx
  • frontend/src/features/SideWindow/components/Page.tsx
  • frontend/src/features/SideWindow/index.tsx
  • frontend/src/features/Station/slice.ts
  • frontend/src/features/Vessel/components/VesselRiskFactor.tsx
  • frontend/src/features/VesselList/VesselListFilters.tsx
  • frontend/src/hooks/useGetFleetSegmentsAsOptions.ts
  • frontend/src/hooks/useGetGearsAsTreeOptions.ts
  • frontend/src/hooks/useGetPortsAsTreeOptions.ts
  • frontend/src/hooks/useGetSpeciesAsOptions.ts
  • frontend/src/hooks/useTable/types.ts
  • frontend/src/store/reducers.ts
  • frontend/src/types.ts
  • frontend/src/ui/NoRsuiteOverrideWrapper.tsx
  • frontend/src/utils/getUrlOrPathWithQueryParams.ts
  • frontend/src/utils/nullify.ts
  • frontend/src/utils/undefinedize.ts
Files not summarized due to errors (1)
  • backend/src/main/resources/db/testdata/V666.5.0__Insert_logbook_raw_messages_and reports.sql: Error: Message exceeds token limit
Files skipped from review due to trivial changes (8)
  • backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/entities/logbook/Catch.kt
  • backend/src/main/resources/db/migration/internal/V0.247__Create_unaccent_extension.sql
  • backend/src/main/resources/db/testdata/V666.2.1__Insert_more_dummy_vessels.sql
  • backend/src/main/resources/db/testdata/V666.5.1__Insert_more_pno_logbook_reports.sql
  • backend/src/test/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/repositories/JpaFacadeAreasRepositoryITests.kt
  • backend/src/test/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/repositories/JpaVesselRepositoryITests.kt
  • datascience/docs/make.bat
  • frontend/src/components/Ellipsised.tsx
Additional comments: 197
frontend/src/domain/types/Gear.ts (1)
  • 3-3: The addition of the comment for the code field enhances code readability and understanding. Good documentation practice.
frontend/cypress/support/commands/loadPath.ts (1)
  • 6-6: Reducing the wait time from 10 seconds to 5 seconds can speed up tests, but please ensure it doesn't lead to flaky tests by verifying the impact on test stability.
frontend/cypress/e2e/utils/assertAll.ts (1)
  • 1-5: The assertAll function is a useful utility for asserting conditions on array items. Consider adding documentation or examples to demonstrate its usage and ensure the assertion function passed is correctly implemented.
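The assertAll utility described above can be sketched roughly as follows; the signature is an assumption based on the review comment, not the actual contents of frontend/cypress/e2e/utils/assertAll.ts:

```typescript
// Hypothetical sketch of an assertAll-style utility: run one assertion
// callback against every element of an array, failing fast on the first error.
function assertAll<T>(items: T[], assertion: (item: T) => void): void {
  items.forEach(assertion)
}

// Example usage: assert every port locode is non-empty.
assertAll(['FRNCE', 'FRMRS'], locode => {
  if (locode.length === 0) {
    throw new Error('Empty locode')
  }
})
```

Documenting the callback contract (throw on failure) in a doc comment, as the review suggests, would make the utility self-explanatory for new test authors.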
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/entities/logbook/LogbookTripGear.kt (2)
  • 5-5: The addition of the comment for the gear field is a good documentation practice.
  • 3-8: Regarding the concern about potential duplication with another Gear class, it's important to evaluate if both classes serve distinct purposes within the application. If they are indeed serving similar roles, consider merging them to avoid redundancy.
datascience/src/pipeline/queries/monitorfish/fleet_segments.sql (1)
  • 10-10: Adding an ORDER BY clause to sort the results by year and segment improves the predictability and usability of the query results. Good practice.
backend/src/main/resources/db/migration/internal/V0.245__Update_logbook_reports_table.sql (1)
  • 1-5: The SQL commands to alter the logbook_reports table by adding new columns are correctly implemented. Ensure that the use of JSONB columns aligns with the application's data access patterns to optimize performance.
frontend/src/domain/types/port.ts (1)
  • 8-8: The addition of the region property to the Port interface is a useful enhancement, providing more detailed information about ports. The use of string | undefined is appropriate for flexibility.
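A minimal sketch of how the optional region plays out for consumers; only `region` comes from the review above, the other field names are assumed for illustration:

```typescript
// Assumed shape; only `region: string | undefined` is documented in the review.
interface Port {
  locode: string
  name: string
  region: string | undefined
}

// Consumers must branch on the optional region before display.
function portLabel(port: Port): string {
  return port.region ? `${port.name} (${port.region})` : port.name
}
```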
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/entities/logbook/LogbookOperationType.kt (1)
  • 7-13: The addition of new enum constants COR, DEL, and RET to the LogbookOperationType enum, along with their descriptions, enhances the expressiveness and clarity of logbook operation types. Good practice.
backend/src/main/resources/db/migration/internal/V0.248__Create_jsonb_to_timestamp.sql (1)
  • 1-6: The SQL function jsonb_to_timestamp is well-defined and correctly uses the TO_TIMESTAMP function to convert JSONB values to timestamps. Marking it as IMMUTABLE is appropriate since it always produces the same output for the same input, which can help with query optimization.
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/filters/ReportingFilter.kt (1)
  • 5-9: The ReportingFilter data class is well-designed, using nullable properties to allow for optional filtering criteria. This approach provides flexibility in querying reports based on archival status, deletion status, and types.
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/entities/port/Port.kt (1)
  • 7-10: The addition of the region property to the Port data class enhances its geographical context, and changing the default value of faoAreas to emptyList() improves the handling of FAO areas by avoiding null checks. Both changes are well-considered and improve the data model.
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/entities/prior_notification/PriorNotificationType.kt (1)
  • 5-10: The PriorNotificationType data class is well-defined, with properties that capture essential aspects of prior notifications. The use of JsonProperty for the name property ensures controlled JSON serialization/deserialization, aligning with best practices for data interchange.
datascience/src/pipeline/queries/monitorfish/pno_types.sql (1)
  • 1-15: The SQL query for selecting prior notification types and their rules is well-structured, using clear aliases and ordering to facilitate easy data retrieval. This query should effectively support the processing and presentation of prior notification type data.
frontend/src/domain/types/specy.ts (1)
  • 3-3: The added comment provides useful context about the code field being shared between Vessel and Specy types. This kind of documentation is helpful for understanding data relationships and guiding future development decisions.
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/use_cases/prior_notification/GetPriorNotificationTypes.kt (1)
  • 6-12: The GetPriorNotificationTypes use case is well-defined, encapsulating the logic for retrieving distinct prior notification types in a clean and reusable manner. This design adheres to the single responsibility principle and enhances code maintainability.
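The actual use case is Kotlin, but the "distinct types" idea it encapsulates can be mirrored on the frontend as a small sketch (type shapes assumed, not taken from the PR):

```typescript
// Deduplicate prior notification type names across a list of notifications,
// mirroring what a GetPriorNotificationTypes-style use case returns.
function getDistinctTypeNames(
  notifications: Array<{ types: Array<{ name: string }> }>
): string[] {
  const names = notifications.flatMap(notification => notification.types.map(type => type.name))
  return [...new Set(names)].sort()
}
```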
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/repositories/VesselRepository.kt (2)
  • 6-10: The addition of nullable parameters to the findVessel function enhances its flexibility for searches. Ensure that the implementation efficiently handles null values.
  • 14-14: Renaming findVesselById to accept an Int parameter improves type safety and aligns with typical data model practices.
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/api/outputs/LogbookMessageTripSegmentDataOutput.kt (1)
  • 5-15: The LogbookMessageTripSegmentDataOutput class and its companion object method demonstrate good use of the DTO pattern, promoting clean architecture and separation of concerns.
backend/src/main/resources/db/testdata/json/V666.2.1__Insert_more_dummy_vessels.jsonc (1)
  • 1-20: The JSONC file for inserting dummy vessel data is structured correctly. Ensure this dummy data is only used in appropriate environments (e.g., development or testing) and not in production.
frontend/cypress/e2e/side_window/prior_notification_list/utils.ts (1)
  • 3-14: Consider replacing cy.wait(500) with more deterministic wait conditions, such as waiting for specific elements to appear or AJAX calls to complete, to reduce flakiness in tests.
frontend/src/constants/constants.ts (1)
  • 16-17: The changes to the COLORS object, including the addition of white and the reordering of constants, improve the organization of the color palette. Ensure that colors marked for removal in TODO comments are not in use elsewhere in the codebase before proceeding.
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/api/outputs/LogbookMessageTripGearDataOutput.kt (1)
  • 5-17: The LogbookMessageTripGearDataOutput class and its companion object method are well-structured. Consider the past suggestion to reuse the Gear object if it aligns with the domain model and requirements.
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/api/outputs/PortDataOutput.kt (1)
  • 6-20: The addition of the region property to the PortDataOutput class enhances the detail of port data. Ensure that region data is consistently available and accurately represented in the source Port entities.
frontend/src/features/FleetSegment/types.ts (1)
  • 9-9: The comment questioning the possibility of segmentName being undefined suggests a need for clarification on the data model. Consider refining the FleetSegment type definition to accurately reflect domain model and data requirements.
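If segmentName can genuinely be undefined in the data, the type should say so and consumers should fall back explicitly; a hedged sketch (field names assumed from the comment above):

```typescript
// Assumed shape: segmentName may genuinely be missing in the data.
interface FleetSegment {
  segment: string
  segmentName: string | undefined
}

// Fall back to the segment code when the human-readable name is absent.
function getFleetSegmentLabel(fleetSegment: FleetSegment): string {
  return fleetSegment.segmentName ?? fleetSegment.segment
}
```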
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/filters/LogbookReportFilter.kt (1)
  • 3-16: The LogbookReportFilter data class introduces a comprehensive set of properties for filtering logbook reports, offering flexibility in how reports can be queried based on various criteria. This approach allows for a wide range of filtering combinations, which is beneficial for the application's functionality.
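The filter itself is a Kotlin data class, but the same shape reaches the API as query parameters. A TypeScript sketch of how a frontend might serialize such a filter, with every criterion optional (property names here are illustrative, not the actual filter fields):

```typescript
// Every criterion is optional; only defined ones become query parameters.
interface LogbookReportFilterParams {
  flagStates?: string[]
  portLocodes?: string[]
  vesselName?: string
  willArriveAfter?: string
}

function toQueryString(filter: LogbookReportFilterParams): string {
  const params = new URLSearchParams()
  Object.entries(filter).forEach(([key, value]) => {
    if (value === undefined) {
      return
    }
    // Arrays are joined with commas; URLSearchParams percent-encodes them.
    params.set(key, Array.isArray(value) ? value.join(',') : value)
  })
  return params.toString()
}
```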
frontend/src/domain/entities/sideWindow/constants.ts (2)
  • 6-6: The addition of PRIOR_NOTIFICATION_LIST to the SideWindowMenuKey enum aligns with the PR's objectives of enhancing the management and consultation of prior notifications. This change supports the introduction of new UI elements for consulting PNOs.
  • 13-13: The corresponding label for PRIOR_NOTIFICATION_LIST in SideWindowMenuLabel is appropriately added, ensuring that the new menu item will have a user-friendly label in the UI.
backend/src/main/resources/db/migration/internal/V0.244__Create_pno_types.sql (1)
  • 1-17: The creation of pno_types and pno_type_rules tables is well-designed, offering flexibility in defining PNO types and rules. The thoughtful use of arrays for species, areas, gears, and flag states, along with sensible defaults for optional fields, supports a wide range of configurations for PNOs.
frontend/src/constants/index.ts (3)
  • 1-2: The imports for i18n-iso-countries and its French locale are appropriate, supporting internationalization efforts for a French-speaking user base.
  • 6-6: Registering the French locale for i18n-iso-countries is a good practice for ensuring that country names are correctly localized in the UI.
  • 15-23: Defining COUNTRIES_AS_ALPHA2_OPTIONS and COUNTRIES_AS_ALPHA3_OPTIONS provides flexible options for representing countries in both alpha-2 and alpha-3 formats, which is useful for different parts of the application.
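The country-option constants presumably map the code-to-name records returned by i18n-iso-countries into select options; a library-free sketch of that mapping, assuming the real constants may differ:

```typescript
// Turn a code→localized-name record (the shape i18n-iso-countries' getNames
// returns) into select options sorted by label.
interface Option {
  label: string
  value: string
}

function toCountryOptions(names: Record<string, string>): Option[] {
  return Object.entries(names)
    .map(([value, label]) => ({ label, value }))
    .sort((a, b) => a.label.localeCompare(b.label))
}
```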
frontend/src/env.d.ts (1)
  • 15-15: The addition of FRONTEND_PRIOR_NOTIFICATION_LIST_ENABLED as a feature flag for the prior notification list functionality is a good practice, allowing for flexible configuration and gradual rollout of new features.
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/api/outputs/PriorNotificationTypeDataOutput.kt (1)
  • 5-18: The PriorNotificationTypeDataOutput class and its companion object method fromPriorNotificationType provide a clean separation of concerns, converting domain entities into data output objects for API responses. This approach is consistent with best practices and enhances the maintainability of the code.
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/specifications/LogbookReportSpecification.kt (1)
  • 8-16: The withVesselLengthGreaterThan method in LogbookReportSpecification provides a useful specification for filtering logbook reports by vessel length. The use of JPA and Spring Data JPA constructs for building dynamic queries is appropriate and follows best practices.
backend/src/test/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/repositories/JpaPortRepositoryITests.kt (2)
  • 16-16: Sorting activePorts by locode in the test findAllActive is a logical addition that ensures consistent and predictable test results.
  • 19-19: Updating the assertion for the size of activePorts from 18 to 20 likely reflects changes in the test data or the underlying data model, improving the test's accuracy.
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/entities/logbook/messages/PNO.kt (2)
  • 18-18: The addition of the catchToLand property is well-implemented and follows best practices for list initialization.
  • 22-23: The addition of the predictedLandingDatetime property is correctly implemented. Ensure consistent handling of time zones across the application when dealing with ZonedDateTime.
```shell
#!/bin/bash
# Verify consistent handling of ZonedDateTime across the application
rg --type kotlin "ZonedDateTime"
```
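As a language-agnostic illustration of the time-zone consistency concern, here is a minimal Python sketch (not the Kotlin code under review) of the usual convention: reject naive datetimes and normalize aware ones to UTC before storing or comparing them.

```python
from datetime import datetime, timezone, timedelta

def to_utc(dt: datetime) -> datetime:
    """Normalize an aware datetime to UTC; reject naive values outright."""
    if dt.tzinfo is None:
        raise ValueError("naive datetime: time zone information is required")
    return dt.astimezone(timezone.utc)

# Two representations of the same instant in different zones...
paris = datetime(2024, 3, 27, 15, 30, tzinfo=timezone(timedelta(hours=1)))
utc = datetime(2024, 3, 27, 14, 30, tzinfo=timezone.utc)

# ...compare equal once both are normalized.
assert to_utc(paris) == to_utc(utc)
```

Applying the same "normalize at the boundary" rule wherever `ZonedDateTime` values enter the system keeps comparisons and persistence consistent.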
datascience/src/pipeline/queries/monitorfish/trip_dates_of_pnos.sql (1)
  • 1-26: The adjustments to the SQL query, including the time range and filtering conditions, are correctly implemented. Verify the performance of the query with the adjusted conditions, especially if the dataset is large.
```shell
#!/bin/bash
# Suggest running EXPLAIN on the query to assess performance
echo "Consider running EXPLAIN on the modified query to assess its performance with the actual dataset."
```
frontend/.env.example (1)
  • 13-13: The addition of the FRONTEND_PRIOR_NOTIFICATION_LIST_ENABLED configuration flag is correctly implemented. Consider documenting its purpose and usage in the project's documentation to ensure clarity for future developers.
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/entities/PortEntity.kt (1)
  • 15-15: The addition of the region property is correctly implemented. Verify if the region property can indeed be null in production to ensure data integrity.
```shell
#!/bin/bash
# Suggest checking the database schema or data to confirm the nullability of the `region` property
echo "Consider checking the database schema or existing data to confirm the nullability of the 'region' property."
```
frontend/.env.local.defaults (1)
  • 13-13: The update to the FRONTEND_PRIOR_NOTIFICATION_LIST_ENABLED configuration flag is correctly implemented, enabling the feature by default in local development environments.
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/entities/prior_notification/PriorNotification.kt (1)
  • 1-37: The introduction of the PriorNotification data class is well-implemented, covering a comprehensive range of properties related to prior notifications. Consider adding documentation comments for each property to enhance maintainability and understanding.
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/entities/logbook/LogbookMessage.kt (1)
  • 14-21: The addition of isEnriched and flagState properties, along with the comments for flagState, reportDateTime, and integrationDateTime, enhances the LogbookMessage data class by providing more context and information. Good job on documenting the properties clearly.
backend/src/main/resources/db/migration/internal/V0.246__Create_jsonb_contains_any_function.sql (1)
  • 1-47: The jsonb_contains_any function is well-designed and efficiently checks for the presence of specified search values within a JSONB object. Marking the function as IMMUTABLE is appropriate and enhances performance by allowing PostgreSQL to optimize calls to this function.
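To make the described semantics concrete, here is a Python sketch of the check the function is said to perform — whether any of the supplied search values appears among a JSONB object's values. This is an analogue for illustration only; the real function runs inside PostgreSQL and may differ in detail (e.g. nested structures).

```python
import json

def contains_any(jsonb_text: str, search_values: list[str]) -> bool:
    """Return True if any search value appears among the object's top-level values.

    Python analogue of the jsonb_contains_any semantics described above;
    the matching rule for nested values is an assumption.
    """
    obj = json.loads(jsonb_text)
    return any(value in search_values for value in obj.values())

assert contains_any('{"a": "PNO", "b": "COE"}', ["PNO"]) is True
assert contains_any('{"a": "DEP"}', ["PNO", "COE"]) is False
```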
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/api/bff/PriorNotificationController.kt (1)
  • 24-24: To avoid coupling domain entities directly with the frontend, consider using a DTO or a form object for the filter parameter instead of @ModelAttribute LogbookReportFilter.
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/repositories/ReportingRepository.kt (1)
  • 12-29: The updates to the ReportingRepository interface, including the addition of validationDate to the save method, the introduction of the findAll method with an optional ReportingFilter parameter, and the new archive method, are well-implemented and align with the objectives of enhancing the system's functionality.
datascience/tests/test_pipeline/test_flows/test_init_pno_types.py (1)
  • 10-42: The test cases in test_init_pno_types.py are well-structured and effectively verify the functionality of the data pipeline flow for initializing PNO types and rules. The use of pd.testing.assert_frame_equal for result verification and the approach to ensure idempotency are commendable.
backend/src/main/resources/db/testdata/V666.27__Insert_dummy_pno_types.sql (1)
  • 1-18: The SQL script for inserting dummy data into the pno_types and pno_type_rules tables and resetting the sequence for pno_types_id_seq is correctly implemented and follows best practices for SQL scripts.
datascience/src/pipeline/parsers/ers/childless_parsers.py (1)
  • 37-44: The addition of the catch_to_land parameter to dynamically select attributes for weight and number of fish is a logical enhancement. To improve readability and maintainability, consider defining the attribute keys ("WL", "WT", "FL", "NF") as constants at the beginning of the module or class. This approach can make the code easier to understand and modify in the future.
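The suggested constants refactor could look like the following sketch. The key strings come from the review text; the mapping between keys and the `catch_to_land` flag, and the function name, are assumptions for illustration, not the module's actual code.

```python
# Attribute keys from the ERS message schema (meanings assumed for illustration).
WEIGHT_LIVE = "WL"  # weight, live
WEIGHT = "WT"       # weight
FISH_LIVE = "FL"    # number of fish, live
NB_FISH = "NF"      # number of fish

def select_catch_attributes(catch_to_land: bool) -> tuple[str, str]:
    """Pick the weight/count attribute keys depending on the catch category."""
    if catch_to_land:
        return WEIGHT_LIVE, FISH_LIVE
    return WEIGHT, NB_FISH

assert select_catch_attributes(True) == ("WL", "FL")
assert select_catch_attributes(False) == ("WT", "NF")
```

Centralizing the keys this way means a schema change touches one place instead of every parser branch.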
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/repositories/LogbookReportRepository.kt (4)
  • 11-11: The addition of the findAllPriorNotifications method with an optional filter parameter is a good practice for fetching data with optional filtering criteria. This enhances the flexibility of data retrieval based on different use cases.
  • 32-35: The update to the findTripAfterTripNumber method signature, which now includes the internalReferenceNumber and tripNumber parameters, is appropriate for the method's purpose. It aligns with the method's goal of finding trips after a specific trip number for a given vessel.
  • 46-49: The update to the updateLogbookMessagesAsProcessedByRule method signature, which now includes the ids list and ruleType string, is a necessary change for marking specific logbook messages as processed according to a rule. This method signature is clear and aligns with the method's functionality.
  • 62-62: The addition of the findDistinctPriorNotificationTypes method is a valuable enhancement for retrieving a list of unique prior notification types. This method will support functionalities that require distinct categorization or filtering based on notification types.
datascience/tests/test_data/remote_database/V666.32__Insert_test_pno_types.sql (1)
  • 1-23: The SQL script for inserting test data related to prior notification types and rules is well-structured and follows best practices. The use of DELETE statements to ensure a clean state and the correct resetting of sequences with ALTER SEQUENCE are appropriate for maintaining data consistency in a test environment.
backend/src/main/resources/db/testdata/V0.221.0__Insert_port_codes.sql (1)
  • 23-25: The SQL insert statements for adding new port entries for France, including ports in Vannes and Brest, are correctly structured and follow best practices. The explicit specification of column names and values ensures clarity and correctness in data insertion.
backend/src/main/resources/db/testdata/V666.11__Insert_risk_factors.sql (1)
  • 1-2: The comment indicating that this file is automatically generated by a local script and should not be updated directly is crucial for maintainability. It ensures that changes are made in a controlled manner through the associated .jsonc file, promoting consistency and efficiency in managing test data.
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/repositories/JpaVesselRepository.kt (2)
  • 18-39: The modifications to the findVessel function, including accepting nullable strings and using isNullOrEmpty() checks, are appropriate and enhance the function's flexibility in querying vessels based on different identifiers. This approach aligns with Kotlin best practices for handling optional parameters.
  • 56-56: Renaming the findVessel function with an Int parameter to findVesselById improves clarity and specificity in the function's purpose, which is to find a vessel by its database ID. This naming convention is more descriptive and aligns with best practices for naming functions based on their functionality.
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/repositories/interfaces/DBReportingRepository.kt (1)
  • 100-104: The update to the update method signature in the DBReportingRepository interface, which now includes additional parameters for more detailed updates to reporting entities, is a logical change that aligns with the evolving requirements for handling reportings. This enhancement allows for more granular control over the updates to reporting entities.
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/use_cases/vessel/GetVessel.kt (4)
  • 36-48: The use of GetVesselPositions within the execute method is a good practice as it encapsulates the logic for retrieving vessel positions into a separate use case. This approach adheres to the Single Responsibility Principle, ensuring that each class or method has a single responsibility.
  • 50-52: Using async to fetch vessel information asynchronously is efficient, especially when dealing with potentially slow I/O operations like database queries. This approach helps in improving the performance of the application by not blocking the main thread.
  • 55-57: Similarly, fetching vessel risk factors asynchronously is a good use of Kotlin's coroutines. It allows for concurrent execution of database queries, which can significantly reduce the overall execution time when multiple queries are involved.
  • 61-67: The logic for appending the beacon number to the vessel information is clear and concise. Using the let function to conditionally modify the vessel object if a beacon number is present is a Kotlin idiomatic way to handle such scenarios. However, it's important to ensure that the beaconRepository.findBeaconNumberByVesselId method handles any potential exceptions internally or that there's adequate error handling surrounding this call.
```shell
#!/bin/bash
# Verify that beaconRepository.findBeaconNumberByVesselId has internal error handling
rg "fun findBeaconNumberByVesselId" --type kotlin
```
frontend/src/features/Logbook/LogbookMessage.types.ts (6)
  • 3-26: The namespace LogbookMessage is well-structured and provides a clear organizational context for the types related to logbook messages. This approach enhances readability and maintainability by grouping related types together.
  • 35-52: The Catch type definition uses UndefineExcept utility type effectively to ensure that the species property is always defined while other properties can be optional. This is a good practice for enforcing data integrity where certain fields are essential.
  • 72-86: The MessageCatchonboard type is well-defined, covering essential properties for onboard catches. However, as previously mentioned, the name should be corrected to MessageCatchOnboard to adhere to the camelCase naming convention.
  • 88-93: The MessagePnoType type definition is clear, but the comment on line 91 suggests replacing the pnoTypeName string with an enum. This would be a good improvement for type safety and readability, as it would restrict the possible values to a predefined set of options.

Consider replacing the pnoTypeName string type with an enum to enforce type safety and clarity.

  • 95-105: The types TripGear and TripSegment are well-defined and provide clear structures for representing gear and segment information in a trip. These types contribute to the overall readability and maintainability of the code by encapsulating related properties.
  • 107-123: The ApiFilter type definition is a good example of using TypeScript's utility types to create a flexible and reusable type for API filters. The use of Partial and Undefine allows for optional properties, which is suitable for filter objects where not all fields may be required in every query.
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/entities/vessel/Vessel.kt (2)
  • 14-14: Adding a comment to clarify that flagState uses the ISO Alpha-2 country code is a good practice. It improves code readability by providing essential context for the property's expected value format.
  • 54-101: > 📝 NOTE

This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap. Original lines [41-101]

The reformatting of the LIKELY_CONTROLLED_COUNTRY_CODES list improves readability by listing each country code on a separate line. This change makes it easier to read and maintain the list of country codes. However, it's important to ensure that this list is kept up-to-date with any changes in international standards or requirements.

```shell
#!/bin/bash
# Sketch only: iso.org's search page is rendered client-side by JavaScript, so
# curl + grep against it is unlikely to yield country codes; a static ISO
# 3166-1 alpha-2 dataset may be needed instead.
curl -s "https://www.iso.org/obp/ui/#search" | grep -oP '(?<=CountryCode">)[A-Z]{2}(?=</span>)' | sort | uniq > iso_codes.txt
diff <(sort LIKELY_CONTROLLED_COUNTRY_CODES.txt) iso_codes.txt
```
datascience/src/pipeline/flows/init_pno_types.py (3)
  • 18-24: The task extract_pno_types correctly uses the pd.read_csv function to extract PNO types from a CSV file. Specifying the dtype for the has_designated_ports column as bool ensures that the data is correctly typed upon loading, which is crucial for maintaining data integrity.
  • 27-35: The task extract_pno_type_rules uses the pd.read_csv function with a converters parameter to correctly parse lists from the CSV file. This approach is essential for handling complex data types in CSV files and ensures that the data is correctly typed and structured for further processing.
  • 38-92: The task load_pno_types_and_rules demonstrates a comprehensive approach to loading data into the database, including deletion of existing data and loading of new data with specific configurations for PostgreSQL array columns. The use of DDL statements to reset sequence values is a good practice to ensure database consistency. However, it's important to ensure that transactions are correctly managed to prevent partial updates in case of errors.
```shell
#!/bin/bash
# Verify transaction management in create_engine and load function implementations
# (ripgrep's type name for Python is "py", not "python")
rg "create_engine" --type py
rg "def load" --type py
```
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/domain/use_cases/reporting/GetAllCurrentReportings.kt (2)
  • 25-30: Introducing a ReportingFilter object to encapsulate filtering criteria is a good practice. It enhances code readability and maintainability by separating the filtering logic from the data retrieval logic. This approach also makes it easier to extend or modify filtering criteria in the future.
  • 36-72: The logic for setting the underCharter property based on different VesselIdentifier types is well-structured and follows best practices for conditional logic. Using when expressions in Kotlin for this purpose enhances readability and maintainability. However, it's crucial to ensure that the findUnderCharterForVessel method in lastPositionRepository handles all potential edge cases and errors gracefully.
```shell
#!/bin/bash
# Verify error handling in findUnderCharterForVessel method
rg "fun findUnderCharterForVessel" --type kotlin
```
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/api/outputs/PriorNotificationDataOutput.kt (2)
  • 5-34: The PriorNotificationDataOutput class is well-defined, encapsulating all relevant properties for prior notifications. This class serves as a clear and structured way to represent prior notification data in the API responses, enhancing the API's usability and readability.
  • 36-77: The companion object method fromPriorNotification is a good example of a factory method pattern, providing a clean and concise way to convert PriorNotification entities into the output format. This approach enhances code modularity and reusability. However, it's important to ensure that all data conversions and mappings are accurate and that any potential data transformation errors are handled appropriately.
```shell
#!/bin/bash
# Verify data conversion accuracy in fromPriorNotification method
rg "fun fromPriorNotification" --type kotlin
```
datascience/src/pipeline/queries/monitorfish/pno_species_and_gears.sql (1)
  • 1-106: The SQL query is well-structured, making extensive use of CTEs to break down the complex logic into manageable parts. This approach enhances readability and maintainability of the query. The use of DISTINCT ON and jsonb_agg functions demonstrates a good understanding of PostgreSQL's capabilities for handling JSON data and ensuring unique records. However, it's important to ensure that the query is optimized for performance, especially considering the potential size of the datasets involved.
```shell
#!/bin/bash
# Verify query performance and optimization
echo "EXPLAIN ANALYZE WITH deleted_corrected_or_rejected_messages AS (...);" > query_analysis.sql
psql -d monitorfish -f query_analysis.sql
```
datascience/tests/test_pipeline/test_shared_tasks/test_segments.py (3)
  • 10-17: > 📝 NOTE

This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap. Original lines [13-47]

Introducing the expected_all_segments pytest fixture is a good practice for test code reusability and maintainability. This fixture provides a centralized definition of expected segment data, which can be used across multiple tests. This approach reduces duplication and makes it easier to update expected data in one place if the underlying logic changes.

  • 50-70: The test test_extract_segments_of_year effectively uses the expected_all_segments fixture to compare the extracted segments against expected data. This approach ensures that the segment extraction logic correctly filters segments based on the specified year. It's important to ensure that the test covers all relevant edge cases and variations in segment data.
```shell
#!/bin/bash
# Verify coverage of edge cases in test_extract_segments_of_year
# (ripgrep's type name for Python is "py", not "python")
rg "test_extract_segments_of_year" --type py
```
  • 74-77: The test test_extract_all_segments correctly uses the expected_all_segments fixture to validate the extraction of all segments. This test ensures that the extraction logic correctly retrieves all segments without filtering by year, which is crucial for validating the completeness of the data.
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/entities/RiskFactorsEntity.kt (1)
  • 66-97: The reformatting of the toVesselRiskFactor function improves readability without altering functionality.
backend/src/test/kotlin/fr/gouv/cnsp/monitorfish/domain/use_cases/GetAllCurrentReportingsUTests.kt (2)
  • 4-4: The addition of the com.nhaarman.mockitokotlin2.any import supports the use of the any() matcher in test methods, enhancing test flexibility.
  • 57-57: The modification of method calls to use any() in place of specific arguments or previous method signatures enhances test flexibility and decouples the tests from specific argument values.

Also applies to: 103-103

frontend/src/api/api.ts (1)
  • 99-99: The addition of the 'Notices' entity to the tagTypes array of the monitorfishApi object supports new features or data types and aligns with the caching strategy of RTK Query.
frontend/src/features/Logbook/Logbook.types.ts (3)
  • 18-18: The TODO comment suggests replacing LogbookMessage.LogbookMessage with LogbookMessage. It's important to verify the intended changes with the development team to ensure type consistency and clarity.
  • 68-68: The TODO comment proposes a type replacement, but as written the source and target types are identical (LogbookMessage.Catch with LogbookMessage.Catch). Clarification with the development team is recommended to ensure the accuracy and intention behind this proposed change.
  • 85-85: The TODO comment questions potential duplication with LogbookMessage.TripGear for the type Gear. Evaluating and consolidating type definitions where possible can simplify the codebase and adhere to DRY principles.
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/entities/VesselEntity.kt (2)
  • 38-38: The added comment clarifying that the flagState property uses the ISO Alpha-2 country code is helpful for understanding the expected format and enhances code readability.
  • 90-128: The reformatting of the toVessel function improves readability without altering functionality.
backend/src/test/kotlin/fr/gouv/cnsp/monitorfish/domain/use_cases/GetInfractionSuspicionWithDMLAndSeaFrontUTests.kt (5)
  • 32-39: Consider adding assertions for other properties of expectedInfractionSuspicion to ensure that all relevant aspects of the InfractionSuspicion object are tested, not just seaFront and dml.
  • 41-41: The use of given for stubbing method calls is correct, but ensure that all possible scenarios are covered in your tests, including edge cases and error handling paths.
```shell
#!/bin/bash
# Verify if there are tests covering error handling paths for findVesselById and find methods.
rg --type kotlin "findVesselById" "src/test/kotlin/fr/gouv/cnsp/monitorfish/domain/use_cases"
rg --type kotlin "find" "src/test/kotlin/fr/gouv/cnsp/monitorfish/domain/use_cases"
```
  • 65-77: This test ensures that no exception is thrown when the vessel ID is not found, which is good. However, consider also verifying that the resulting InfractionSuspicion object has the expected default or null values for seaFront and dml in this scenario.
  • 86-101: It's good practice to assert the expected behavior when the vessel is not found. This test ensures no exception is thrown, but it might also be beneficial to check that the InfractionSuspicion object's properties are correctly handled (e.g., default or null values for seaFront and dml).
  • 110-129: Similar to previous comments, while it's important to ensure no exception is thrown when the district is not found, it would also be valuable to assert the expected state of the InfractionSuspicion object in this scenario, particularly the seaFront and dml properties.
frontend/scripts/generate_test_data_seeds.mjs (3)
  • 16-34: The function setJsonbSqlPropsToNull correctly processes JSON objects to set properties ending with :sql to null. However, consider adding documentation to explain the purpose of this function and the specific use case it addresses, as it involves a non-standard approach to handling JSON properties.
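A Python sketch of the described transformation may help document its intent (the actual script is JavaScript; the handling of nested objects and arrays shown here is an assumption):

```python
def set_sql_props_to_null(value):
    """Recursively set every property whose key ends with ':sql' to None."""
    if isinstance(value, dict):
        return {
            key: None if key.endswith(":sql") else set_sql_props_to_null(val)
            for key, val in value.items()
        }
    if isinstance(value, list):
        return [set_sql_props_to_null(item) for item in value]
    return value

data = {"id": 1, "value:sql": "NOW()", "nested": [{"date:sql": "NOW()", "ok": 2}]}
assert set_sql_props_to_null(data) == {
    "id": 1,
    "value:sql": None,
    "nested": [{"date:sql": None, "ok": 2}],
}
```

The `:sql` suffix convention lets the seed generator keep raw SQL expressions out of the JSONB payload while still emitting them as separate UPDATE statements.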
  • 72-96: The generateUpdateStatements function effectively generates UPDATE statements for JSONB properties. Similar to previous comments, consider adding documentation to clarify the function's purpose and the reasoning behind the specific handling of :sql properties. Additionally, extracting complex logic into helper functions could enhance readability and maintainability.
  • 107-136: This loop processes each JSONC file to generate corresponding SQL files. The logic is clear, but adding comments to describe each step of the process (e.g., reading JSONC files, stripping comments, generating SQL statements) would improve code readability and help future maintainers understand the workflow.
backend/src/main/resources/db/testdata/json/V666.11__Insert_risk_factors.jsonc (1)
  • 12-12: Consider using null instead of empty strings ("") for fields like external_immatriculation and post_control_comments to explicitly denote 'no value'. This can help distinguish between an actual empty string value and the absence of a value.

Also applies to: 27-27, 73-73, 88-88

frontend/.eslintrc.js (1)
  • 18-18: The modifications to the ESLint configuration, specifically allowing certain console methods and customizing the @typescript-eslint/no-unused-vars rule, are well-considered improvements for code quality and maintainability.

Also applies to: 107-112

backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/repositories/JpaReportingRepository.kt (1)
  • 14-14: The enhancements to JpaReportingRepository, including the use of EntityManager and modifications to method signatures, are well-considered improvements for handling complex queries and transactions. Ensure that the use of EntityManager adheres to best practices for database interaction.

Also applies to: 23-23, 26-29, 38-41, 52-55, 65-87, 105-108

backend/src/test/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/repositories/JpaLogbookRawMessageRepositoryITests.kt (1)
  • 5-5: Changing the import to use the general Assertions class from AssertJ is a good practice for simplifying import statements and improving code readability.

Also applies to: 116-116

frontend/src/features/Mission/components/MissionForm/ActionForm/shared/GearsField.tsx (1)
  • 27-27: Replacing DeepPartial with PartialDeep from 'type-fest' and updating the type assertion for meta.error are good practices for improving type safety and code clarity. These changes align with best practices for TypeScript usage.

Also applies to: 47-47

datascience/tests/conftest.py (1)
  • 217-219: Adding pytest_plugins to share fixtures between modules is a good practice for reusability and maintainability of test code. However, ensure that the referenced fixtures are correctly defined in the specified modules to avoid runtime errors.
Makefile (1)
  • 50-52: The addition of the update-test-data target in the Makefile is a useful enhancement for automating the generation of test data seeds. It's important to ensure that the script generate_test_data_seeds.mjs is properly documented and maintained to prevent future issues with test data consistency.
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/repositories/interfaces/DBLogbookReportRepository.kt (3)
  • 96-99: The addition of internalReferenceNumber and tripNumber parameters to findFirstAcknowledgedDateOfTrip enhances the function's specificity in querying. Ensure that all calls to this function have been updated to include these new parameters.
  • 168-171: Adding ids and ruleType parameters to updateERSMessagesAsProcessedByRule allows for more targeted updates of ERS messages. It's important to verify that the logic for determining which messages to mark as processed aligns with the intended business rules.
  • 186-194: The new function findDistinctPriorNotificationType is designed to retrieve distinct prior notification types, which could be useful for filtering or categorization purposes in the UI. Ensure that the returned data is correctly handled and displayed in the relevant components.
datascience/src/pipeline/parsers/ers/log_parsers.py (1)
  • 219-236: > 📝 NOTE

This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap. Original lines [222-251]

The enhancements to parse_pno for handling predicted_landing_datetime_utc and distinguishing between catchOnboard and catchToLand are significant improvements for data accuracy and usability. Ensure that these new fields are properly integrated into downstream data processing and analytics pipelines.
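The onboard/to-land split can be sketched as follows. This is an illustration of the distinction, not the parser's actual code: the real implementation reads ERS XML attributes, and the `category` key and its values here are assumptions.

```python
def split_pno_catches(catches: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split parsed PNO catches into onboard vs to-land lists.

    Sketch of the catchOnboard / catchToLand distinction; the 'category'
    field and its values are hypothetical.
    """
    catch_onboard = [c for c in catches if c.get("category") == "ONBOARD"]
    catch_to_land = [c for c in catches if c.get("category") == "UNLOADED"]
    return catch_onboard, catch_to_land

catches = [
    {"species": "HKE", "weight": 100.0, "category": "ONBOARD"},
    {"species": "HKE", "weight": 80.0, "category": "UNLOADED"},
]
onboard, to_land = split_pno_catches(catches)
assert [c["weight"] for c in onboard] == [100.0]
assert [c["weight"] for c in to_land] == [80.0]
```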

backend/src/test/kotlin/fr/gouv/cnsp/monitorfish/domain/use_cases/GetVesselUTests.kt (3)
  • 44-123: The reorganization of Position object initialization in the tests improves readability and maintainability. It's a good practice to keep test data creation clear and concise.
  • 127-127: Renaming findVessel to findVesselById in vesselRepository clarifies the function's purpose and improves code readability. Ensure that all references to this function have been updated accordingly.
  • 132-150: Refactoring the execute method call in GetVessel for consistency is a good practice. It helps in maintaining a uniform code structure across the project. Ensure that the refactored method is thoroughly tested to confirm its behavior remains unchanged.
frontend/cypress/e2e/side_window/prior_notification_list/filter_bar.spec.ts (8)
  • 14-24: The test for filtering prior notifications by countries correctly sets up the intercept and asserts the presence of table rows after filtering. This ensures the filtering functionality works as expected for country criteria.
  • 26-38: The test for filtering by fleet segments is well-structured, with proper setup and validation steps. It verifies the application's ability to filter prior notifications based on fleet segment criteria.
  • 40-50: This test ensures that the filtering by species functionality is working correctly by checking the table rows after applying the filter. It's important to maintain such tests to ensure feature integrity over time.
  • 52-70: The test for filtering by gears demonstrates the application's capability to filter based on gear codes. The commented-out code (lines 64-65) suggests a potential improvement area in the UI for handling gear code selection.
  • 72-92: The tests for filtering by last control date effectively validate both "more than 3 months ago" and "less than 1 month ago" scenarios. These tests are crucial for ensuring the accuracy of control date-based filtering.
  • 94-121: The test for filtering by arrival date (default) checks the functionality with the expected time range. It's a good practice to validate default filtering behavior to ensure user expectations are met.
  • 123-146: This test for filtering by a specific arrival date range ensures that users can narrow down the list of prior notifications based on their requirements. It's important for enhancing the usability of the filtering feature.
  • 148-208: The tests for filtering by ports and type are well-implemented, covering important aspects of the filtering functionality. These tests help in ensuring that users can effectively filter prior notifications based on various criteria.
datascience/src/pipeline/parsers/flux/log_parsers.py (1)
  • 206-218: The update to parse_pno to handle ONBOARD and UNLOADED catches separately is a good enhancement. Ensure that this new parsing logic integrates well with downstream processing and data consumption.
```shell
#!/bin/bash
# Verify integration of parse_pno changes with downstream processing
# This is a placeholder for a more specific verification script
echo "Ensure downstream processing correctly handles 'catchOnboard' and 'catchToLand' data."
```
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/entities/LogbookReportEntity.kt (2)
  • 70-84: The addition of new properties to LogbookReportEntity supports the PR's objectives well. Ensure that these properties are correctly integrated with the database layer and are used appropriately in the entity's methods.
```shell
#!/bin/bash
# Verify integration of new properties with database and their usage in methods
# This is a placeholder for a more specific verification script
echo "Check database schema for new properties and review usage in entity methods."
```
  • 107-204: > 📝 NOTE

This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap. Original lines [87-202]

The introduction of new functions (fromLogbookMessage, toPriorNotification, deserializeJSONList) is a positive step towards supporting the PR's objectives. Ensure that these functions are correctly implemented and integrated with the rest of the system.

```shell
#!/bin/bash
# Verify correctness of new functions and their integration
# This is a placeholder for a more specific verification script
echo "Review implementation of new functions and check their integration."
```
datascience/src/pipeline/flows_config.py (1)
  • 38-38: The addition of init_pno_types to the data pipeline flows is a key enhancement. Ensure that the flow is correctly configured and integrates well with the rest of the data pipeline.
```shell
#!/bin/bash
# Verify configuration and integration of init_pno_types flow
# This is a placeholder for a more specific verification script
echo "Check configuration of init_pno_types flow and its integration with the data pipeline."
```
backend/src/main/resources/db/testdata/json/V666.5.1__Insert_more_pno_logbook_reports.jsonc (1)
  • 1-219: The test data added for PNO logbook reports appears comprehensive and well-structured. Ensure that the data correctly represents realistic scenarios and is used effectively in testing the new functionality.
```shell
#!/bin/bash
# Verify correctness and comprehensiveness of test data
# This is a placeholder for a more specific verification script
echo "Review test data for PNO logbook reports for correctness and comprehensiveness."
```
backend/src/test/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/repositories/JpaReportingRepositoryITests.kt (15)
  • 8-8: Adding the ReportingFilter import suggests new functionality related to filtering reportings. Ensure that this new class is thoroughly tested, especially if it involves complex filtering logic.
  • 25-38: The test save Should save a reporting from the pending alert correctly sets up a PendingAlert instance and asserts various properties after saving and retrieving it. However, consider adding assertions for all properties of PendingAlert to ensure complete coverage.
  • 62-81: Similar to the previous comment, the test save Should save a reporting sets up a Reporting instance but only asserts a subset of its properties. Expanding the assertions to cover all properties would enhance the test's thoroughness.
  • 106-124: The test save Should save a reporting When no vessel identifier given demonstrates handling cases where optional properties are not provided. This is good practice for ensuring robustness. However, ensure that the absence of the vesselIdentifier does not affect other functionalities unexpectedly.
  • 148-153: The method findCurrentAndArchivedByVesselIdentifierEquals is tested for its ability to return current and archived reportings. It's good to see that the test checks for both current and archived states. Consider adding a test case to verify the behavior when no matching records are found.
  • 174-179: This test case for findCurrentAndArchivedByVesselIdentifierEquals with a filtering date is a good example of testing for specific scenarios. It's important to also test edge cases, such as the exact moment of transition between dates.
  • 219-236: The archive test method correctly asserts the change in the isArchived flag. To further improve this test, consider verifying that other properties of the reporting remain unchanged after archiving.
  • 244-261: The delete test method is crucial for ensuring that reportings can be correctly marked as deleted. It would be beneficial to also test the behavior when attempting to delete a reporting that does not exist or has already been deleted, to ensure the system handles these cases gracefully.
  • 267-274: The test findAll Should return non-archived reportings is straightforward and checks for the expected behavior. To enhance this test, consider adding more diverse data to the setup to ensure the filter works correctly in various scenarios.
  • 276-284: Similarly, the test for returning non-deleted reportings is clear and to the point. Adding more complex scenarios, such as mixed states of reportings (some deleted, some not), would provide a more robust test of the filtering functionality.
  • 287-297: The test for returning ALERT & INFRACTION_SUSPICION reportings is a good example of testing specific filtering criteria. It would be beneficial to also include negative test cases, where reportings of other types are present but not returned by the filter.
  • 300-317: The comprehensive test for returning non-archived, non-deleted, ALERT & INFRACTION_SUSPICION reportings is well-structured. Consider testing the interaction of filters more extensively, such as cases where one filter might override another.
  • 324-335: The update Should update a given InfractionSuspicion test method is crucial for ensuring that updates are applied correctly. It's good to see thorough assertions on the updated properties. Additionally, verify that unrelated properties are not inadvertently modified during the update process.
  • 357-365: The test for updating a given Observation follows a similar pattern to the previous update test. It's important to ensure that the update functionality is consistent across different types of reportings. Consider adding tests for scenarios where the update should fail, such as updating a non-existent reporting.
  • 384-392: The test for converting an InfractionSuspicion to an Observation is an interesting case of testing more complex behavior. Ensure that the conversion process is thoroughly tested, including verifying that no unintended data loss or corruption occurs during the conversion.
datascience/tests/test_pipeline/test_parsers/test_ers.py (1)
  • 305-307: The addition of catchToLand and predictedLandingDatetimeUtc fields in the test_pno_parser function suggests new parsing requirements. Ensure that these fields are correctly parsed from the XML and that their formats are validated, especially since dates and times can have various formats.
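The exact serialization of these fields is not shown in the diff; as a sketch of the suggested format validation, a parser for an ISO-8601 UTC timestamp such as a hypothetical predictedLandingDatetimeUtc value could normalize timezone handling like this:

```python
from datetime import datetime, timezone

def parse_utc_field(value: str) -> datetime:
    """Parse an ISO-8601 timestamp such as a hypothetical
    'predictedLandingDatetimeUtc' value and normalize it to UTC."""
    # fromisoformat only accepts a trailing 'Z' from Python 3.11 on;
    # replacing it explicitly keeps this compatible with 3.7-3.10.
    dt = datetime.fromisoformat(value.replace("Z", "+00:00"))
    if dt.tzinfo is None:
        # Treat naive timestamps as UTC, as the '...Utc' suffix implies.
        dt = dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc)

print(parse_utc_field("2020-05-06T11:41:20Z"))  # 2020-05-06 11:41:20+00:00
```

A test along these lines would catch both naive timestamps and the 'Z' suffix variant, which are the two formats most likely to appear in ERS/FLUX payloads.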
datascience/src/pipeline/flows/enrich_logbook.py (8)
  • 20-24: The function extract_pno_types uses a hard-coded query file path and database name. Consider externalizing these configurations to make the code more flexible and environment-agnostic.
  • 58-81: The function extract_pno_trips_period performs a database extraction based on a period. It's good practice to handle potential exceptions that might arise from database operations or when the expected data format is not met (e.g., if dates.loc[0, ...] fails due to an empty DataFrame).
  • 84-96: The function extract_pno_species_and_gears is well-structured for extracting specific data based on given periods. However, consider adding error handling for database connectivity issues and validating the returned DataFrame to ensure it meets expectations before proceeding.
  • 99-186: The compute_pno_segments function uses DuckDB to perform complex data processing. Ensure that the SQL query is optimized for performance, especially considering the potential size of the pno_species_and_gears and segments DataFrames. Additionally, validate the assumptions about the data structure and content to prevent runtime errors.
  • 189-298: Similar to compute_pno_segments, the compute_pno_types function performs significant data processing. Given the complexity of the SQL query, it's crucial to ensure that the query is efficient and that the data structures used are validated for correctness. Consider adding explicit error handling for database operations and data integrity checks.
  • 302-322: The merge_segments_and_types function merges two DataFrames on a specific key. Ensure that the DataFrames are indexed correctly to avoid potential merge issues. Additionally, consider handling cases where the merge might result in unexpected outcomes due to missing or duplicate keys.
  • 399-438: The extract_enrich_load_logbook function orchestrates the extraction, enrichment, and loading process. Given its complexity, ensure that each step is robustly implemented with appropriate error handling and validation. Additionally, consider the memory consumption implications of processing large datasets and optimize accordingly.
  • 441-474: The Prefect flow definition is well-structured for orchestrating the enrichment process. Ensure that the parameters and task dependencies are correctly configured to prevent runtime issues. Additionally, consider adding error handling and validation for the flow parameters to ensure they meet expected criteria.
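The duplicate-key and missing-key concerns raised for merge_segments_and_types can be guarded directly in pandas. A minimal sketch with invented frames (the key and column names are assumed, not taken from the actual code):

```python
import pandas as pd

# Hypothetical frames keyed on a logbook report id, mirroring the
# merge of computed segments and types discussed above.
segments = pd.DataFrame(
    {"logbook_reports_pno_id": [1, 2], "trip_segments": [["SWW01"], []]}
)
pno_types = pd.DataFrame(
    {"logbook_reports_pno_id": [1, 2], "pno_types": [["Type A"], ["Type B"]]}
)

merged = segments.merge(
    pno_types,
    on="logbook_reports_pno_id",
    how="outer",            # keep PNOs that appear on only one side
    validate="one_to_one",  # raise MergeError if either key has duplicates
    indicator=True,         # add a _merge column flagging one-sided rows
)
assert (merged["_merge"] == "both").all()
```

The validate and indicator arguments turn the silent failure modes (duplicated keys fanning out rows, unmatched keys producing NaN) into explicit, testable conditions.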
backend/src/test/kotlin/fr/gouv/cnsp/monitorfish/domain/use_cases/GetLogbookMessagesUTests.kt (6)
  • 62-70: The instantiation of GetLogbookMessages is consistent and readable. However, consider using a setup method (e.g., @BeforeEach in JUnit) to avoid repeating the instantiation in every test case, which can improve code maintainability and reduce redundancy.
  • 145-153: Similar to the previous comment, consider using a setup method for initializing shared objects or configurations to improve test code maintainability and readability.
  • 192-200: Again, the repeated instantiation of GetLogbookMessages suggests the potential benefit of refactoring common setup tasks into a @BeforeEach annotated method.
  • 272-280: As with previous instances, moving common setup tasks to a @BeforeEach method could enhance the readability and maintainability of the test suite.
  • 345-353: The repeated pattern of instantiating GetLogbookMessages in each test case reinforces the suggestion to use a setup method for common initializations.
  • 382-390: This is another instance where using a setup method for common test initializations could improve the test suite's structure and reduce redundancy.
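The repeated-instantiation point translates to any xUnit-style framework: in JUnit it would be a @BeforeEach method, and the equivalent shape in Python's unittest (all names here are hypothetical stand-ins, not the actual classes) looks like:

```python
import unittest

class FakeLogbookRepository:
    """Stand-in for the repository that each test would otherwise mock inline."""
    def find_messages(self, trip_number):
        return [f"FAR-{trip_number}", f"PNO-{trip_number}"]

class GetLogbookMessagesTests(unittest.TestCase):
    def setUp(self):
        # The shared construction lives here once, analogous to a JUnit
        # @BeforeEach method, instead of being repeated in every test body.
        self.use_case = FakeLogbookRepository()

    def test_returns_far_and_pno(self):
        self.assertEqual(
            self.use_case.find_messages(9463715),
            ["FAR-9463715", "PNO-9463715"],
        )

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(GetLogbookMessagesTests).run(result)
print(result.wasSuccessful())  # True
```

Centralizing the setup means a constructor change touches one method instead of every test case.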
backend/src/main/kotlin/fr/gouv/cnsp/monitorfish/infrastructure/database/repositories/JpaLogbookReportRepository.kt (8)
  • 8-17: The addition of new imports such as PriorNotification, LogbookReportFilter, RiskFactorsEntity, EntityManager, and Specification indicates significant enhancements to the repository's functionality, particularly in handling more complex queries and interactions with the database. Ensure these new dependencies are properly managed and that there are no unused imports after the changes.
  • 38-75: The companion object contains specifications for filtering logbook reports based on vessel size and last controlled dates. It's crucial to ensure that these specifications are correctly implemented and tested, especially the logic for determining if a vessel is less than twelve meters. The comment on line 45 suggests a need for clarification or confirmation of the logic used.
#!/bin/bash
# Locate the vessel-size specification and any tests covering it to confirm
# the less-than-twelve-metres logic (a starting point; refine as needed).
rg "withIsLessThanTwelveMetersVessel" backend/src
  • 280-296: 📝 NOTE: This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap (original lines 248-323).

The methods findLastTripBeforeDateTime, findTripBeforeTripNumber, and findFirstAndLastOperationsDatesOfTrip have been updated to handle exceptions more explicitly and provide more detailed error messages. Ensure that these changes are consistent with the application's overall error handling strategy and that the error messages are informative and useful for debugging.

  • 349-359: Similar to the previous comment, the method findTripAfterTripNumber has been updated for better error handling and messaging. Confirm that these updates align with the application's error handling conventions and that the error messages are clear and helpful.
  • 411-426: The method findLANAndPNOMessagesNotAnalyzedBy introduces logic to filter LAN and PNO messages based on whether they have been corrected. This logic is crucial for ensuring data integrity and accuracy in the application. It's important to verify that this filtering logic is correctly implemented and that it has been thoroughly tested, especially considering the potential complexity of message relationships.
#!/bin/bash
# Locate the LAN/PNO filtering logic and any tests covering it
# (a starting point; refine as needed).
rg "findLANAndPNOMessagesNotAnalyzedBy" backend/src
  • 432-434: The method findDistinctPriorNotificationTypes is a straightforward addition that enhances the repository's functionality by allowing the retrieval of unique prior notification types. Ensure that this method is properly tested, especially in scenarios with a large variety of notification types in the database.
  • 436-439: The method updateLogbookMessagesAsProcessedByRule introduces batch processing for marking logbook messages as processed by a specific rule. This is a significant enhancement for performance when dealing with large datasets. Ensure that the batch size (postgresChunkSize) is appropriately configured for the application's database and that this method is thoroughly tested for various batch sizes and datasets.
#!/bin/bash
# Locate the batched update and the postgresChunkSize configuration
# (a starting point; refine as needed).
rg "updateLogbookMessagesAsProcessedByRule|postgresChunkSize" backend/src
  • 502-505: The private method getCorrectedMessageIfAvailable introduces logic to determine if a PNO message has been corrected. This is an important part of ensuring data accuracy and integrity. It's crucial to verify that this logic is correctly implemented and that it has been thoroughly tested, especially considering the implications of incorrectly identifying corrected messages.
#!/bin/bash
# Locate the corrected-message lookup and any tests covering it
# (a starting point; refine as needed).
rg "getCorrectedMessageIfAvailable" backend/src
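The chunked-update pattern behind updateLogbookMessagesAsProcessedByRule can be sketched generically. The Kotlin implementation and the postgresChunkSize value are not in the diff, so this Python sketch only illustrates the batching itself:

```python
from itertools import islice
from typing import Iterable, Iterator, List

def chunked(ids: Iterable[int], size: int) -> Iterator[List[int]]:
    """Yield successive batches of at most `size` ids — the pattern behind
    running one UPDATE per chunk rather than one statement over all ids."""
    it = iter(ids)
    while batch := list(islice(it, size)):
        yield batch

batches = list(chunked(range(10), 4))
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Testing the batching in isolation like this makes it easy to check the edge cases the review mentions: a final partial chunk, an empty input, and a chunk size larger than the dataset.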
datascience/tests/test_data/remote_database/V666.5__Reset_test_logbook.sql (4)
  • 4-4: Resetting the sequence logbook_report_id_seq is appropriate for test data reset scenarios. Ensure this operation aligns with the intended test setup.
  • 1-2: Unconditional deletion from logbook_reports and logbook_raw_messages is suitable for resetting test environments.
  • 128-157: 📝 NOTE: This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap (original lines 6-153).

Ensure the inserted test data in logbook_raw_messages and logbook_reports aligns with the schema and intended test scenarios. The diversity of data, including both ERS and FLUX message types, is beneficial for comprehensive testing.

  • 131-156: The UPDATE statements modifying the departureDatetimeUtc within the JSONB value column of logbook_reports are crucial for keeping test data relevant. Verify that the JSONB paths and new values are correctly formed and align with the intended data model and test scenarios.
datascience/tests/test_pipeline/test_parsers/test_flux.py (2)
  • 482-484: The addition of catchToLand key-value pairs in the test data for test_batch_parse function is a significant change. It's important to ensure that this new data is correctly handled by the parsing logic and accurately reflects the expected structure of the data being processed. This change likely aligns with the system's enhanced capabilities for managing and consulting prior notifications (PNOs), specifically in relation to tracking catches intended for landing.

Ensure that the parsing logic and any related data processing functionalities are updated and tested to handle this new catchToLand data correctly. It might also be beneficial to add assertions in this test to verify that the catchToLand data is being parsed as expected.

  • 510-512: Similar to the previous comment, the addition of catchToLand key-value pairs in another test case within the test_batch_parse function underscores the importance of this data in the system's enhanced functionality.

As with the previous instance, ensure that the parsing logic is adequately handling this new data. Consider expanding the test coverage to include scenarios where catchToLand data varies in complexity or format to ensure robustness.
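An assertion of the suggested kind can be sketched as follows; the catchToLand field name mirrors the review, while the species codes and weights are invented purely for illustration:

```python
# Hypothetical parsed PNO message carrying the new catchToLand entries
# (structure assumed from the test data described above).
parsed_message = {
    "predictedLandingDatetimeUtc": "2020-05-06T11:41:20Z",
    "catchToLand": [
        {"species": "GHL", "weight": 1500.0},
        {"species": "COD", "weight": 200.0},
    ],
}

catch_to_land = parsed_message["catchToLand"]
# Every entry should carry at least a species code and a weight.
assert all({"species", "weight"} <= entry.keys() for entry in catch_to_land)
# Aggregate checks catch silently dropped entries.
assert sum(entry["weight"] for entry in catch_to_land) == 1700.0
```

Asserting both per-entry structure and an aggregate value gives the test some robustness against entries being dropped or duplicated during parsing.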

datascience/tests/test_pipeline/test_flows/test_enrich_logbook.py (20)
  • 25-25: Mocking the check_flow_not_running task within the flow object is a good practice for isolating the test environment from external dependencies. However, ensure that this mock accurately simulates the behavior of the original task to avoid false positives or negatives in your tests.
  • 28-87: The expected_pno_types fixture provides a comprehensive set of data for testing PNO type extraction. It's well-structured and covers various scenarios, including different PNO types, rules, and associated species and areas. This thorough approach is commendable for ensuring the robustness of the tested functionality.
  • 90-183: The sample_pno_species_and_gears fixture is well-prepared, offering a diverse set of data points for testing the extraction of species and gears from PNOs. It includes various years, species, gears, FAO areas, weights, and flag states, which is excellent for testing the function's ability to handle different data structures and values.
  • 186-201: The expected_pno_species_and_gears fixture is concise and directly related to the test case it supports. It provides clear expected outcomes for the test, which is crucial for validating the correctness of the function under test. This fixture demonstrates good practice in test data preparation.
  • 204-239: The segments fixture is well-designed, offering a variety of segments with associated years, names, gears, FAO areas, and species. This diversity is essential for testing the computation of PNO segments accurately. The fixture's structure facilitates easy understanding and modification, which is beneficial for maintaining the tests.
  • 242-322: The expected_computed_pno_types fixture provides detailed expected results for testing the computation of PNO types. It includes a mix of scenarios with different gears and PNO types, which is excellent for ensuring the function's ability to handle various inputs. The inclusion of None values for certain cases is also a good practice for testing edge cases.
  • 325-349: The expected_computed_pno_segments fixture is well-crafted, offering clear expected outcomes for the computation of PNO segments. It includes scenarios with and without segments, which is crucial for testing the function's robustness. The detailed structure of the expected data aids in validating the correctness of the computation.
  • 352-387: The pnos_to_load fixture is effectively structured, providing a set of PNOs with associated gears, types, and segments for testing the loading function. It includes scenarios with multiple gears and types, which is valuable for testing the function's ability to handle complex data structures. The clear definition of expected inputs facilitates the validation of the loading process.
  • 390-429: The expected_loaded_pnos fixture accurately represents the expected state of PNOs after loading. It includes scenarios with enriched PNOs, various gears, types, and segments, which is essential for verifying the loading function's effectiveness. The detailed expected outcomes are crucial for ensuring the accuracy of the test.
  • 432-530: The expected_merged_pnos fixture is comprehensive, providing detailed expected results for testing the merging of PNO segments and types. It covers a wide range of scenarios, including different gears, types, and segments, which is excellent for ensuring the merge function's capability to handle various data structures. The fixture's structure is clear and facilitates easy comparison with actual results.
  • 533-535: This test function test_extract_pno_types correctly asserts the equality of the extracted PNO types against the expected dataframe. Using pd.testing.assert_frame_equal is the appropriate method for comparing pandas DataFrames, ensuring that the test accurately validates the function's output.
  • 538-547: The test_extract_pno_trips_period function effectively tests the extraction of PNO trips period. The use of the Period class for defining start and end times, along with the correct use of pytz.UTC.localize for timezone awareness, demonstrates attention to detail in handling datetime objects.
  • 550-566: The test_extract_pno_species_and_gears function is well-implemented, testing the extraction of species and gears from PNOs. Sorting the DataFrame before comparison is a good practice to ensure consistent ordering, which is crucial for accurate assertion using pd.testing.assert_frame_equal.
  • 569-573: The test_compute_pno_types function accurately tests the computation of PNO types. It demonstrates good practice by providing a sample input and comparing the computed result against an expected outcome using pd.testing.assert_frame_equal, ensuring the function's correctness.
  • 576-583: The test_compute_pno_types_with_empty_gears_list_only function specifically tests the computation of PNO types when the gears list is empty. This targeted testing of edge cases is commendable, as it ensures the function's robustness in handling various scenarios.
  • 586-593: The test_compute_pno_segments function effectively tests the computation of PNO segments. It provides a clear input and expected output, and uses pd.testing.assert_frame_equal for accurate comparison. This approach ensures the function's ability to correctly compute segments based on the provided data.
  • 596-605: The test_compute_pno_segments_with_empty_gears_only function is a good example of testing specific edge cases, in this case, when the gears list is empty. This attention to detail in testing ensures that the function can handle a variety of input scenarios reliably.
  • 609-615: The test_merge_segments_and_types function correctly tests the merging of computed PNO segments and types. The use of pd.testing.assert_frame_equal for comparing the result with the expected outcome ensures the accuracy of the merge function's implementation.
  • 618-641: The test_load_then_reset_logbook function thoroughly tests the process of loading and then resetting logbook data. It validates the initial state, the state after loading, and the final state after resetting, ensuring the entire process works as expected. This comprehensive approach is crucial for testing functions that modify the database state.
  • 644-738: The test_flow function comprehensively tests the entire data processing flow, including initial state validation, the effect of manual updates, and the impact of re-running the flow with and without reset. This test covers a wide range of scenarios, ensuring the flow's correctness and robustness. The detailed assertions and use of assert_frame_equal for comparison provide a solid basis for validating the flow's functionality.
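The sort-before-compare idiom these tests rely on can be shown in isolation (data invented for the example):

```python
import pandas as pd

expected = pd.DataFrame({"cfr": ["A", "B"], "weight": [10.0, 20.0]})
# Query results may come back in any order; sort on a stable key and
# drop the old index before comparing, as the tests above do.
result = pd.DataFrame({"cfr": ["B", "A"], "weight": [20.0, 10.0]})

pd.testing.assert_frame_equal(
    result.sort_values("cfr").reset_index(drop=True),
    expected.sort_values("cfr").reset_index(drop=True),
)
print("frames match")
```

Without the reset_index(drop=True), assert_frame_equal would fail on the reordered index values even when the data is identical.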
backend/src/test/kotlin/fr/gouv/cnsp/monitorfish/domain/use_cases/TestUtils.kt (1)
  • 141-141: The addition of the isEnriched property to LogbookMessage instances across various utility functions is consistent and correctly implemented. This property is crucial for distinguishing between enriched and non-enriched logbook messages, which can be essential for testing scenarios that depend on this attribute. However, it's important to ensure that all relevant tests and use cases that utilize these utility functions are updated to account for this new property. Additionally, consider if there are any specific test cases that should be added to specifically test the behavior or processing of enriched vs. non-enriched logbook messages.

Also applies to: 167-167, 193-193, 219-219, 238-238, 257-257, 316-316, 342-342, 368-368, 419-419, 445-445, 503-503, 528-528, 553-553, 578-578, 602-602, 627-627, 725-725, 739-739, 755-755, 769-769, 849-849, 863-863, 879-879, 893-893

backend/src/main/resources/db/testdata/V666.5.0__Insert_logbook_raw_messages_and reports.sql (4)
  • 420-420: The comment explains the rationale behind the change in handling the dimensions field from numeric to string due to the parser's inability to cast certain string values to float. This is a good practice as it provides context for the change and its impact on the data.
  • 609-619: The UPDATE statement dynamically sets the tripStartDate for the operation number 'OOF20191011059902' based on the current UTC time minus 72 hours. This approach ensures that test data remains relevant over time. However, it's important to ensure that this dynamic date setting aligns with the intended test scenarios.
#!/bin/bash
# Locate the dynamic tripStartDate update for operation 'OOF20191011059902'
# to review it against the intended test scenarios.
rg "OOF20191011059902" backend/src/main/resources/db/testdata
  • 609-619: Multiple UPDATE statements are used to dynamically set various date fields (predictedArrivalDatetimeUtc, tripStartDate, etc.) based on the current UTC time with specific intervals subtracted. This approach is beneficial for keeping test data up-to-date. However, it's crucial to verify that these dynamic dates do not conflict with each other or with the logic of the application, especially considering the specific intervals used (e.g., '24 hours 30 minutes', '72 hours').
#!/bin/bash
# List the dynamic interval updates to cross-check them against each other
# and against the application logic (intervals quoted from the review above).
rg "72 hours|24 hours 30 minutes" backend/src/main/resources/db/testdata

Also applies to: 679-689, 691-701, 703-713, 715-725

  • 735-756: The UPDATE statement enriches a specific logbook_reports record with detailed information in trip_gears, trip_segments, and pnoTypes. This detailed enrichment provides valuable test data for specific scenarios. It's crucial to verify that the provided values (e.g., gear types, segment names, pnoTypes) are consistent with the application's domain logic and that they cover the intended test cases comprehensively.
#!/bin/bash
# Locate the enriched record's trip_gears, trip_segments and pnoTypes values
# to check them against the application's domain logic.
rg "trip_gears|trip_segments|pnoTypes" backend/src/main/resources/db/testdata
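The interval arithmetic behind these dynamic dates can be sanity-checked outside SQL. A sketch using the two intervals quoted above (the field semantics — trip start preceding predicted arrival — are an assumption):

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)

# Offsets mirroring the SQL intervals used in the UPDATE statements above.
trip_start = now - timedelta(hours=72)
predicted_arrival = now - timedelta(hours=24, minutes=30)

# Sanity check that the dynamic dates keep their intended order:
# the trip must start before the predicted arrival.
assert trip_start < predicted_arrival
print(predicted_arrival - trip_start)  # 1 day, 23:30:00
```

Encoding such ordering checks as assertions (here, or in a test against the seeded database) would surface any interval change that breaks the scenario's internal chronology.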

ivangabriele and others added 24 commits March 27, 2024 11:44
@ivangabriele ivangabriele force-pushed the vincent/prior_notification_list branch from 484121c to a4b1258 Compare March 27, 2024 10:44
@ivangabriele ivangabriele force-pushed the vincent/prior_notification_list branch from a4b1258 to 80e90d4 Compare March 27, 2024 10:47
sonarcloud bot commented Mar 27, 2024

Quality Gate passed

Issues: 0 new issues, 1 accepted issue

Measures: 0 security hotspots, no coverage data, 0.0% duplication on new code

See analysis details on SonarCloud

@ivangabriele ivangabriele merged commit bc01d7f into master Mar 27, 2024
25 checks passed
@ivangabriele ivangabriele deleted the vincent/prior_notification_list branch March 27, 2024 13:19