Read Zarr Mesh Files #8682
Conversation
Actionable comments posted: 5
🧹 Nitpick comments (8)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala (2)

`13-13`: Consider using binary kilobyte conversion for accuracy.

The current implementation uses decimal kilobyte conversion (division by 1000), but binary conversion (division by 1024) might be more appropriate for memory-related calculations.

```diff
- val maxSizeKiloBytes = Math.floor(config.Datastore.Cache.ImageArrayChunks.maxSizeBytes.toDouble / 1000.0).toInt
+ val maxSizeKiloBytes = Math.floor(config.Datastore.Cache.ImageArrayChunks.maxSizeBytes.toDouble / 1024.0).toInt
```
`15-20`: Improve weight function robustness.

The weight function could be enhanced for better accuracy and error handling:
- Very small arrays might return 0 weight due to integer division
- No error handling for potential exceptions when accessing the array size

```diff
  def cacheWeight(key: String, arrayBox: Box[MultiArray]): Int =
    arrayBox match {
      case Full(array) =>
-       (array.getSizeBytes / 1000L).toInt
+       try {
+         Math.max(1, (array.getSizeBytes / 1024L).toInt) // Ensure minimum weight of 1
+       } catch {
+         case _: Exception => 1 // Fallback weight for error cases
+       }
      case _ => 0
    }
```

webknossos-datastore/app/com/scalableminds/webknossos/datastore/storage/RemoteSourceDescriptorService.scala (2)
`40-40`: Clarify the comment about attachment path resolution.

The comment states that "attachment paths are already resolved with baseDir in the local case", but it is unclear what this means in practice. Consider expanding the comment to explain when and how this resolution happens.
`41-46`: Consider explicit error handling for credential lookup.

Using `.shiftBox` will convert Fox failures to None, potentially hiding important error information. Consider handling credential lookup failures more explicitly to provide better error messages.

```diff
  def vaultPathFor(attachment: LayerAttachment)(implicit ec: ExecutionContext): Fox[VaultPath] =
    for {
-     credentialBox <- credentialFor(attachment).shiftBox
-     remoteSourceDescriptor = RemoteSourceDescriptor(attachment.path, credentialBox.toOption)
+     credential <- credentialFor(attachment).futureBox.flatMap {
+       case Full(cred) => Fox.successful(Some(cred))
+       case Empty      => Fox.successful(None)
+       case failure    => Fox.failure(s"Failed to retrieve credentials for attachment ${attachment.name}: ${failure}")
+     }
+     remoteSourceDescriptor = RemoteSourceDescriptor(attachment.path, credential)
      vaultPath <- dataVaultService.getVaultPath(remoteSourceDescriptor)
    } yield vaultPath
```

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/Hdf5MeshFileService.scala (2)
`28-30`: Improve null handling clarity.

The pattern `.toOption.flatMap(value => Option(value))` is unclear. Consider a more explicit approach.

```diff
- .toOption
- .flatMap { value =>
-   Option(value) // catch null
- }
+ .toOption
+ .flatMap(Option(_)) // Convert null to None
```
`93-95`: Add documentation for the complex position calculation.

The global position calculation involves multiple transformations. Please add a comment explaining the formula and units.

```diff
  override def computeGlobalPosition(segmentInfo: NeuroglancerSegmentManifest,
                                     lod: Int,
                                     lodScaleMultiplier: Double,
                                     currentChunk: Int): Vec3Float =
+   // Global position = origin + (chunk_position * chunk_shape * 2^lod * lod_scale * lod_scale_multiplier)
+   // This transforms from chunk coordinates to global mesh coordinates
    segmentInfo.gridOrigin + segmentInfo.chunkPositions(lod)(currentChunk).toVec3Float * segmentInfo.chunkShape * Math
      .pow(2, lod) * segmentInfo.lodScales(lod) * lodScaleMultiplier
```

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/DSFullMeshService.scala (1)
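The formula discussed in the `computeGlobalPosition` comment above can be illustrated with a minimal, self-contained sketch. Note this is purely illustrative: `Vec3` is a stand-in for the real `Vec3Float`, and all values are made up.

```scala
// Minimal stand-in for Vec3Float, for illustration only.
case class Vec3(x: Double, y: Double, z: Double) {
  def +(o: Vec3): Vec3 = Vec3(x + o.x, y + o.y, z + o.z)
  def *(s: Double): Vec3 = Vec3(x * s, y * s, z * s)
  def *(o: Vec3): Vec3 = Vec3(x * o.x, y * o.y, z * o.z) // component-wise product
}

// Global position = origin + chunkPos * chunkShape * 2^lod * lodScale * lodScaleMultiplier
def globalPosition(origin: Vec3,
                   chunkPos: Vec3,
                   chunkShape: Vec3,
                   lod: Int,
                   lodScale: Double,
                   lodScaleMultiplier: Double): Vec3 =
  origin + chunkPos * chunkShape * math.pow(2, lod) * lodScale * lodScaleMultiplier

// e.g. chunk (1, 2, 0) with shape (16, 16, 16) at lod 1 and unit scales:
// globalPosition(Vec3(0, 0, 0), Vec3(1, 2, 0), Vec3(16, 16, 16), 1, 1.0, 1.0)
// yields Vec3(32.0, 64.0, 0.0)
```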
`128-128`: Use the standard Fox pattern for Instant creation.

The `Instant.nowFox` pattern is non-standard. Consider using the conventional approach.

```diff
- before <- Instant.nowFox
+ before = Instant.now
```

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/ZarrMeshFileService.scala (1)
`73-74`: Consider adding cache size limits.

The caches appear to be unbounded, which could lead to memory issues with many mesh files. Consider adding size limits similar to the 100-entry limit in `NeuroglancerPrecomputedMeshFileService`.

```diff
- private lazy val openArraysCache = AlfuCache[(MeshFileKey, String), DatasetArray]()
- private lazy val attributesCache = AlfuCache[MeshFileKey, MeshFileAttributes]()
+ private lazy val openArraysCache = AlfuCache[(MeshFileKey, String), DatasetArray](maxCapacity = 50)
+ private lazy val attributesCache = AlfuCache[MeshFileKey, MeshFileAttributes](maxCapacity = 100)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (38)

- app/controllers/WKRemoteWorkerController.scala (1 hunks)
- app/models/organization/CreditTransactionService.scala (2 hunks)
- conf/evolutions/135-neuroglancer-attachment.sql (1 hunks)
- conf/evolutions/reversions/135-neuroglancer-attachment.sql (1 hunks)
- conf/messages (2 hunks)
- frontend/javascripts/admin/api/mesh.ts (3 hunks)
- frontend/javascripts/types/api_types.ts (1 hunks)
- frontend/javascripts/viewer/model/sagas/meshes/precomputed_mesh_saga.ts (1 hunks)
- frontend/javascripts/viewer/view/action-bar/create_animation_modal.tsx (1 hunks)
- tools/postgres/schema.sql (2 hunks)
- unreleased_changes/8682.md (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreModule.scala (2 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DSMeshController.scala (5 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (2 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/ZarrStreamingController.scala (3 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/NgffZarr3GroupHeader.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/explore/NgffV0_5Explorer.scala (3 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/explore/PrecomputedExplorer.scala (4 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DatasetLayerAttachments.scala (2 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/AgglomerateService.scala (3 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/BinaryDataServiceHolder.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DSRemoteTracingstoreClient.scala (2 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DataSourceService.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ZarrAgglomerateService.scala (2 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/DSFullMeshService.scala (5 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/Hdf5MeshFileService.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala (3 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshMappingHelper.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerMeshHelper.scala (2 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshFileService.scala (4 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/ZarrMeshFileService.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/storage/DataVaultService.scala (2 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/storage/Hdf5FileCache.scala (3 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/storage/RemoteSourceDescriptorService.scala (4 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/VolumeTracingZarrStreamingController.scala (2 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/IdWithBoolUtils.scala (0 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/volume/TSFullMeshService.scala (1 hunks)
💤 Files with no reviewable changes (1)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/IdWithBoolUtils.scala
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: backend-tests
- GitHub Check: build-smoketest-push
- GitHub Check: frontend-tests
🔇 Additional comments (54)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/ZarrStreamingController.scala (2)
`16-16`: LGTM: Import updated for refactored group header.

The import change from `Zarr3GroupHeader` to `NgffZarr3GroupHeader` aligns with the codebase-wide refactor to enforce mandatory NGFF metadata.

`86-86`: LGTM: Consistent usage of the new group header with mandatory metadata.

The updates correctly instantiate `NgffZarr3GroupHeader` with a direct NGFF metadata parameter instead of one wrapped in `Some()`, reflecting the change from an optional to a mandatory metadata field.

Also applies to: 135-135
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/VolumeTracingZarrStreamingController.scala (2)
`27-27`: LGTM: Import statement updated for refactored group header.

The import change aligns with the codebase-wide refactor to use `NgffZarr3GroupHeader` with mandatory NGFF metadata.

`260-260`: LGTM: Correct instantiation with mandatory metadata.

The update properly instantiates `NgffZarr3GroupHeader` with a direct metadata parameter, correctly adapting to the new mandatory field structure.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DSRemoteTracingstoreClient.scala (2)
`8-8`: LGTM: Import updated for refactored group header type.

The import change correctly reflects the transition to `NgffZarr3GroupHeader`.

`54-57`: LGTM: Method signature and implementation updated consistently.

The method signature and JSON deserialization are correctly updated to work with `NgffZarr3GroupHeader`, maintaining type safety and consistency with the refactor.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/explore/NgffV0_5Explorer.scala (2)
`10-10`: LGTM: Import updated for refactored group header.

The import change correctly reflects the use of `NgffZarr3GroupHeader`.

`29-30`: LGTM: Consistent adaptation to mandatory NGFF metadata.

The changes correctly access `groupHeader.ngffMetadata` directly, properly adapting to the new mandatory field structure. All usages are consistently updated throughout the file.

Also applies to: 34-41, 136-139
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/NgffZarr3GroupHeader.scala (2)
`6-10`: LGTM: Case class refactored to enforce mandatory NGFF metadata.

The renaming to `NgffZarr3GroupHeader` and changing `ngffMetadata` from optional to mandatory correctly enforces the presence of NGFF metadata, improving type safety and API clarity.

`12-35`: LGTM: JSON format properly updated for mandatory metadata.

The JSON serialization and deserialization correctly handle the mandatory `ngffMetadata` field and maintain the proper JSON structure with metadata under the `"attributes"."ome"` path. The format is well implemented and consistent.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/storage/DataVaultService.scala (1)
`38-40`: LGTM! Clean delegation pattern.

The new overloaded method follows good design principles by extracting the path from `LayerAttachment` and delegating to the existing implementation. This maintains consistency and avoids code duplication.
conf/evolutions/reversions/135-neuroglancer-attachment.sql (1)
`1-1`: Correct approach for PostgreSQL enum reversions.

The explanatory comment is appropriate since PostgreSQL doesn't support direct removal of enum values. This follows standard database migration practices.
unreleased_changes/8682.md (1)
`1-5`: Well-documented feature addition.

The changelog entry clearly documents the new zarr3-based precomputed mesh support and properly references the associated database evolution. The format follows standard changelog conventions.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshMappingHelper.scala (1)
`32-32`: Good consistency improvements in comments.

The standardization of "meshFile" capitalization in comments improves code consistency and aligns with the broader mesh file handling refactoring.

Also applies to: 35-35, 38-38
tools/postgres/schema.sql (2)
`24-24`: Schema version increment looks correct.

The schema version update from 134 to 135 properly reflects the database changes in this migration.

`166-166`: Addition of the neuroglancerPrecomputed enum value is appropriate.

This change enables the system to recognize and handle neuroglancer precomputed mesh attachments, which aligns with the PR objectives for supporting multiple mesh file formats.
frontend/javascripts/viewer/model/sagas/meshes/precomputed_mesh_saga.ts (1)
`368-368`: API parameter change aligns with backend refactoring.

The change from passing the full `meshFile` object to just `meshFile.name` as `meshFileName` correctly aligns with the backend refactoring that now uses the `MeshFileKey` abstraction. This simplification is consistent with the unified mesh file handling approach.

frontend/javascripts/viewer/view/action-bar/create_animation_modal.tsx (1)
`261-261`: Improved magnification retrieval with proper error handling.

The change from direct array indexing to `getMagByIndexOrThrow` is a good improvement that adds bounds checking and proper error handling. This aligns with the standardization of magnification handling across the codebase.

conf/evolutions/135-neuroglancer-attachment.sql (1)
`1-9`: Well-structured database evolution script.

The evolution script follows PostgreSQL best practices:
- Proper transaction handling ensures atomicity
- Schema version assertion (line 3) prevents out-of-order application
- Correct syntax for adding enum values (line 5)
- Schema version update maintains consistency (line 7)

The addition of `'neuroglancerPrecomputed'` to the enum enables support for the new mesh attachment format as intended.

conf/messages (1)
`268-270`: LGTM: Well-structured error messages for mesh file operations.

The new error message keys follow the established naming convention and provide clear, localized feedback for mesh file operations that can fail. Including the file name in the error messages will help users identify the specific file causing issues.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DataSourceController.scala (2)
`268-268`: LGTM: Method name clarification.

The rename from `listAgglomerates` to `listAgglomeratesFiles` provides better clarity about what the method returns (file listings rather than agglomerate data itself).

`453-454`: LGTM: Improved API consistency with structured parameters.

Using `DataSourceId(organizationId, datasetDirectoryName)` instead of separate string parameters provides better type safety and consistency with the mesh file service's updated interface.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/DataSourceService.scala (1)
`369-370`: LGTM: Comprehensive cache invalidation for attachments.

The addition of attachment-based cache clearing ensures that vault cache entries for all layer attachments (meshes, agglomerates, etc.) are properly invalidated alongside the existing magnification-based cache clearing. This provides more thorough cache management.
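The shape of such attachment-aware invalidation can be sketched as follows. This is a simplified, hypothetical model (`AttachmentSketch`, `VaultCacheSketch`, and prefix-based removal are illustrative, not the actual datastore types):

```scala
import scala.collection.mutable

// Illustrative attachment: only the path matters for invalidation here.
case class AttachmentSketch(path: String)

// Toy vault cache keyed by path-like strings.
class VaultCacheSketch {
  private val entries = mutable.Map[String, Array[Byte]]()

  def put(key: String, value: Array[Byte]): Unit = entries(key) = value

  // Drop every cached entry whose key starts with one of the given paths,
  // returning how many entries were removed.
  def removeEntriesFor(paths: Seq[String]): Int = {
    val stale = entries.keys.filter(key => paths.exists(key.startsWith)).toSeq
    stale.foreach(entries.remove)
    stale.size
  }

  def size: Int = entries.size
}
```

Invalidation on reload would then clear both per-magnification entries and, additionally, entries for every path in `allAttachments`, so stale mesh or agglomerate data cannot be served after a datasource change.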
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerMeshHelper.scala (1)
`110-110`: LGTM: Improved parameter naming for clarity.

The rename from `encoding` to `meshFormat` provides better semantic clarity about what the parameter represents, aligning with the mesh file format terminology used throughout the refactored mesh services.

Also applies to: 115-115
webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DatasetLayerAttachments.scala (3)
`18-20`: LGTM: Convenient aggregation method for all attachments.

The `allAttachments` method provides a clean way to access all attachment types in a single collection, which is useful for operations like cache clearing that need to process all attachments uniformly.
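A minimal sketch of such an aggregation helper, with hypothetical field names (the actual `DatasetLayerAttachments` definition may differ):

```scala
// Illustrative attachment type; the real code uses LayerAttachment.
case class Attachment(name: String)

// Hypothetical simplification of DatasetLayerAttachments: several typed
// attachment fields plus one flat view over all of them.
case class LayerAttachmentsSketch(
    meshes: Seq[Attachment] = Seq.empty,
    agglomerates: Seq[Attachment] = Seq.empty,
    segmentIndex: Option[Attachment] = None
) {
  // One flat collection, convenient for uniform operations such as cache clearing.
  def allAttachments: Seq[Attachment] =
    meshes ++ agglomerates ++ segmentIndex.toSeq
}
```

Callers that need to touch every attachment (e.g. vault cache invalidation) can then iterate `allAttachments` instead of handling each attachment kind separately.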
`29-29`: LGTM: New enum value for neuroglancer precomputed format.

Adding `neuroglancerPrecomputed` to the `LayerAttachmentDataformat` enum aligns with the PR objectives to support reading mesh files in this format.

`37-40`: LGTM: Credential support for remote attachments.

Adding the optional `credentialId` field with a sensible default enables credential-aware handling of layer attachments, supporting secure access to remote mesh files while maintaining backward compatibility.

app/controllers/WKRemoteWorkerController.scala (1)
`82-82`: Excellent error handling improvements!

The explicit error mapping for job status updates and credit transaction operations significantly improves error traceability. The specific error messages (`"job.updateStatus.failed"`, `"job.creditTransaction.failed"`, `"job.creditTransaction.refund.failed"`) will make debugging much easier when these operations fail.

Also applies to: 87-88, 92-93
app/models/organization/CreditTransactionService.scala (1)
`7-7`: Robust transaction handling with proper null-safety!

The Box pattern matching approach elegantly handles cases where jobs may not have associated credit transactions. The explicit handling of `Empty` cases (transaction-less jobs) prevents potential runtime exceptions while preserving the original business logic for successful cases.

Also applies to: 46-56, 61-70
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/volume/TSFullMeshService.scala (1)
`45-45`: Good naming consistency improvement!

The rename from `loadFullMeshFromMeshfile` to `loadFullMeshFromMeshFile` follows proper camelCase conventions and aligns with the broader mesh file handling refactoring.

Also applies to: 49-49
webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreModule.scala (1)
`9-9`: Well-structured service architecture expansion!

The addition of format-specific mesh file services (Zarr, HDF5) and agglomerate services demonstrates a clean separation of concerns. Binding these as eager singletons ensures they're ready for use at startup, which is appropriate for core datastore functionality.

Also applies to: 11-12, 36-43
frontend/javascripts/admin/api/mesh.ts (1)
`25-25`: Excellent API payload optimization!

Simplifying the request payloads to use only `meshFileName` instead of the full `APIMeshFileInfo` object reduces data transfer and aligns with the backend's new key-based mesh file handling approach. This change maintains functionality while improving efficiency.

Also applies to: 55-55, 75-75
webknossos-datastore/app/com/scalableminds/webknossos/datastore/storage/Hdf5FileCache.scala (1)
`14-14`: LGTM! Well-structured enhancements for mesh file support.

The additions properly extend the HDF5 cache functionality:
- New lazy attributes for reading mesh-specific metadata from HDF5 files
- A clean overloaded method that accepts `LayerAttachment` for better API consistency

Also applies to: 39-48, 97-98
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ZarrAgglomerateService.scala (1)
`21-27`: Good dependency injection refactoring!

The changes properly implement dependency injection:
- Clean constructor injection with the `@Inject` annotation
- Appropriate use of the new `RemoteSourceDescriptorService` for vault path resolution
- Proper integration with the shared `ChunkCacheService`

Also applies to: 60-60, 68-68
frontend/javascripts/types/api_types.ts (1)
`1006-1013`: Clear documentation of mesh format version support.

The updated comment provides explicit version support information, making it easier for developers to understand which formats are supported. The removal of the `path` and `fileType` properties aligns with the backend's simplified mesh file identification approach.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/AgglomerateService.scala (2)
`26-26`: Good refactoring with improved clarity!

The dependency injection setup with `@Inject` is properly implemented, and the method/variable renames improve code readability.

Also applies to: 30-33, 43-43

`62-75`: Clear parameter naming for optional values.

The rename from `layerName` to `layerNameOpt` better indicates the optional nature of the parameter, and the cache clearing predicates are correctly updated.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/explore/PrecomputedExplorer.scala (2)
`16-23`: Good implementation of mesh attachment support!

The mesh exploration logic is well structured, with proper error handling that gracefully falls back to empty attachments. The transform validation ensures the expected 3x4 matrix format.

Also applies to: 54-56, 93-104

`54-54`: Verify null safety for meshPath.

The code uses `remotePath / precomputedHeader.meshPath` without checking whether `meshPath` exists. If `meshPath` is null or undefined, this could cause issues.

Consider adding a null/undefined check:

```diff
- meshAttachments <- exploreMeshesForLayer(remotePath / precomputedHeader.meshPath, credentialId)
+ meshAttachments <- precomputedHeader.meshPath match {
+   case Some(path) => exploreMeshesForLayer(remotePath / path, credentialId)
+   case None       => Fox.successful(Seq.empty)
+ }
```

Or ensure that `PrecomputedHeader.meshPath` is properly typed as `Option[String]` to enforce compile-time safety.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/BinaryDataServiceHolder.scala (1)
`18-32`: Clean dependency injection refactoring!

The change from internal instantiation to constructor injection of `ChunkCacheService` and `AgglomerateService` improves testability and follows the IoC principle. The singleton behavior is properly maintained as documented in the comments.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/storage/RemoteSourceDescriptorService.scala (1)
`151-151`: Verification script:

```shell
#!/bin/bash
# Show the LayerAttachment definition to confirm the type of `path`
rg -C 3 "case class LayerAttachment" webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DatasetLayerAttachments.scala

# Show the `findGlobalCredentialFor` method signature and its usage context
rg -C 5 "findGlobalCredentialFor" webknossos-datastore/app/com/scalableminds/webknossos/datastore/storage/RemoteSourceDescriptorService.scala
```

webknossos-datastore/app/com/scalableminds/webknossos/datastore/controllers/DSMeshController.scala (1)

`33-98`: **Clean controller refactoring to use the unified mesh file abstraction!** The transition from direct file references to using `MeshFileKey` through `meshFileService.lookUpMeshFile` provides better encapsulation and supports multiple mesh formats consistently.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshFileService.scala (5)

`26-29`: **LGTM! Clean refactoring of the constructor.** The simplified constructor that only takes `DataVaultService` is a good improvement, removing the unused `DataStoreConfig` dependency.

`31-31`: **Good cache key refactoring.** The change from path-based to `MeshFileKey`-based caching aligns well with the structured mesh file identification approach.

`33-39`: **Well-adapted method to use MeshFileKey.** The method properly retrieves the vault path from the attachment and includes good validation of the transform array length.

`75-85`: **Correctly updated to use the MeshFileKey pattern.** The method properly retrieves the vault path and uses the cache effectively. The chunk scale calculation based on vertex quantization bits is appropriate.

`123-126`: **Well-implemented cache clearing method.** The filtering logic correctly handles both the required dataSourceId matching and the optional layerName filtering. A good addition for cache management.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/ZarrMeshFileService.scala (4)

`20-62`: **Well-designed data model with proper JSON parsing.** The `MeshFileAttributes` case class appropriately extends `Hdf5HashedArrayUtils` for hash function support, and the custom JSON reader correctly handles the nested structure of zarr.json files.

`107-125`: **Solid implementation of mesh chunk listing.** The method properly retrieves segment manifest data using hash-based lookups and safely converts the byte data. Good error handling with `tryo`.

`206-226`: **Excellent disk access optimization!** Sorting by byte offset before reading and then restoring the original order is a smart optimization for spinning-disk access patterns. This can significantly improve performance when reading multiple non-sequential chunks.

`228-237`: **Comprehensive cache clearing implementation.** Properly clears both caches and correctly handles the compound key structure in `openArraysCache`. The filtering logic is consistent with other services.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/MeshFileService.scala (3)

`169-174`: **Clarify the default quantization bits value.** The method returns 0 for non-neuroglancer formats. Is 0 the correct default value indicating "no quantization", or should this throw an error or return an Option type for unsupported formats?

`176-196`: **Clean delegation pattern implementation.** All methods consistently delegate to the appropriate format-specific service based on the attachment's data format. The Fox conversions for the HDF5 methods are handled correctly.

`198-211`: **Comprehensive cache clearing across all services.** The method properly clears the local cache and delegates to all format-specific services, correctly aggregating the total count of cleared entries.
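The read-ordering optimization praised in the review (sort chunk requests by byte offset for sequential disk access, then restore the caller's original order) can be sketched generically. Names here are illustrative, not the actual `ZarrMeshFileService` implementation:

```scala
// Illustrative byte range within a mesh file.
case class ByteRange(offset: Long, length: Int)

// Read the given ranges in ascending offset order (friendlier to spinning
// disks), but return the results in the order the ranges were requested.
def readInOffsetOrder[A](ranges: Seq[ByteRange])(read: ByteRange => A): Seq[A] = {
  // Remember each range's position in the request, then sort by offset.
  val sortedWithIndex = ranges.zipWithIndex.sortBy(_._1.offset)
  // Perform the reads sequentially on disk.
  val results = sortedWithIndex.map { case (range, originalIndex) =>
    originalIndex -> read(range)
  }
  // Restore the caller's request order.
  results.sortBy(_._1).map(_._2)
}

// readInOffsetOrder(Seq(ByteRange(4096, 64), ByteRange(0, 64)))(r => r.offset)
// returns Seq(4096, 0): reads happened offset-first, results match request order.
```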
Hi, I have a high-level question regarding this: does this mean that the frontend no longer needs to pay attention to the mesh file type? Or is this still relevant and returned by the backend when the mesh files are requested?

Correct! And because of that, I removed fileType and path from the ApiMeshFileInfo. formatVersion remains (but only to warn that mesh files with version < 3 are no longer supported).
Here are my first comments :)
Here is the rest of the review :)
Thanks for implementing all this so fast 🏎️ 💨
Here are my testing results:
- I can no longer explore the dataset gs://fafb-ffn1-20190805/segmentation/, which I think has precomputed meshes. On master (wk.org) this worked, so I think this is caused by this PR.
- The rest works very nicely (hdf5 mesh, ad-hoc mesh, zarr mesh)
Similar to Read Zarr Agglomerate Files #8633, we now support reading mesh files in zarr format. Note that there are also neuroglancerPrecomputed meshes, so this time there are three services the meshFileService delegates to.
The frontend no longer needs to specially handle neuroglancerPrecomputed meshes, since the lookUpMeshFileKey function abstracts over that. Because of that, I also simplified the frontend types.
Note that zarr mesh files must be registered in the datasource-properties.json; they are not explored.
Also fixed a small error in job result status reporting.
Also fixed a small error in mag selection for the animation job.
Steps to test:
TODOs:
Issues: