Marble AI handles user access control by assigning each world or project to specific creators, editors, or viewers and enforcing permissions at the workspace level. When a user generates a world, Marble AI records who created it and ties that record to an identity in the system. Access is then restricted by role: some users may only explore generated environments, while others may upload images, create new scenes, or manage project settings. This structure ensures that sensitive or private 3D spaces are accessible only to authorized team members.
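The role hierarchy described above can be sketched as a simple permission check. The role names and the action-to-role table below are illustrative assumptions, not Marble's actual API:

```python
from enum import Enum

class Role(Enum):
    VIEWER = 1   # may only explore generated worlds
    EDITOR = 2   # may also upload images and create new scenes
    OWNER = 3    # may additionally manage project settings

# Hypothetical permission table: each action requires a minimum role.
REQUIRED_ROLE = {
    "view_world": Role.VIEWER,
    "upload_image": Role.EDITOR,
    "create_scene": Role.EDITOR,
    "manage_settings": Role.OWNER,
}

def is_allowed(user_role: Role, action: str) -> bool:
    """True if the user's role meets or exceeds the action's minimum role."""
    return user_role.value >= REQUIRED_ROLE[action].value

print(is_allowed(Role.VIEWER, "view_world"))    # True
print(is_allowed(Role.VIEWER, "upload_image"))  # False
```

A real deployment would resolve the user's role from the workspace membership record rather than passing it in directly, but the check itself reduces to a comparison like this.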
Audit logging complements these access controls by recording key actions such as world generation, edits, exports, and deletions. For example, if a team member uploads interior footage to produce a 3D model, the system logs this event along with a timestamp and user identifier. These logs help organizations meet internal governance requirements and provide a clear trail of who did what. Developers integrating Marble AI into enterprise pipelines can route these logs into their own monitoring systems or combine them with organizational IAM policies.
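A minimal sketch of the kind of structured audit record you might emit when routing these events into your own monitoring stack. The field names and action strings here are assumptions for illustration, not Marble's log schema:

```python
import json
from datetime import datetime, timezone

def audit_event(user_id: str, action: str, world_id: str) -> str:
    """Build a JSON audit record with a timestamp and user identifier."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,     # who did it
        "action": action,       # e.g. "world_generated", "world_exported"
        "world_id": world_id,   # which world was affected
    }
    return json.dumps(record)

# Ship the serialized record to your logging/SIEM pipeline of choice.
event = audit_event("u-123", "world_generated", "w-456")
print(event)
```

Emitting one structured record per action keeps the trail machine-queryable, so "who exported world X last week" becomes a simple log search.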
If your system also uses a vector database, such as Milvus or Zilliz Cloud, you should apply the same access rules there. Worlds and embeddings stored for search or retrieval should be visible only to users who have permission to see them. This avoids semantic leakage: for example, a user should not be able to retrieve embeddings representing a private home scan unless they have explicit access to that world. Consistent enforcement across Marble AI and your database layer keeps governance airtight.
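The idea of scoping vector search to permitted worlds can be sketched in-memory. This mirrors the metadata filter you would pass to a vector database (for instance, a Milvus filter expression restricting results by a `world_id` field); the store layout and field names are assumptions for illustration:

```python
def search(query, store, allowed_worlds, top_k=3):
    """Nearest-neighbor search restricted to worlds the caller may see."""
    def dist(a, b):
        # squared Euclidean distance between two embedding vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # Enforce access BEFORE ranking, so unauthorized embeddings can
    # never surface, even if they are the closest match.
    visible = [item for item in store if item["world_id"] in allowed_worlds]
    return sorted(visible, key=lambda item: dist(query, item["vec"]))[:top_k]

store = [
    {"world_id": "public-lobby", "vec": [0.1, 0.20]},
    {"world_id": "private-home", "vec": [0.1, 0.25]},
]

# A user without access to "private-home" never sees its embedding,
# even though it is the nearest neighbor to this query.
results = search([0.1, 0.24], store, allowed_worlds={"public-lobby"})
print(results)
```

Filtering before ranking is the key design choice: applying permissions after retrieval risks leaking the existence of private content through result counts or scores.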
