Claude Code runs on macOS 13.0 (Ventura) or later, Windows 10 (version 1809) or later, and Linux (Ubuntu 18.04+, CentOS 7+, or equivalent distributions). Hardware requirements are minimal: 4GB of RAM (16GB recommended for comfortable multi-file work) and 500MB of available disk space. On macOS, both Apple Silicon (M1/M2/M3) and Intel processors are fully supported through native installers. On Windows, Git for Windows, WSL, or an equivalent Git installation is required to use Claude Code's version control features. Linux support covers most modern distributions, including Ubuntu, Debian, CentOS, Fedora, and Arch-based systems.

Claude Code requires an active Pro, Max, Team, Enterprise, or Console plan account for authentication. Installation uses either the native standalone installer (recommended; it has no external dependencies and works on all platforms) or npm (requires Node.js 18+). Once installed, authenticate via the CLI (claude login) and you're ready to use Claude Code.

Several integrations extend where Claude Code can run. VS Code integration requires the Claude Code extension from the marketplace (macOS, Windows, and Linux); JetBrains IDE integration requires the Claude plugin; desktop access via Cowork requires Claude Desktop (macOS 12.0+, Windows 10+). For container-based development, Claude Code runs inside Docker containers with access to the host filesystem, which is useful for isolated environments. It can also run on EC2, Google Cloud VMs, or any Linux server with SSH access, enabling remote development workflows.

The tool is lightweight and doesn't require specialized hardware: a MacBook Air, a Windows laptop, or a modest Linux server all run Claude Code efficiently. For work on large codebases, 16GB of RAM is recommended to avoid swap-related slowdowns when accessing massive repositories.
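As a rough sketch of the npm-based path described above, the snippet below checks for Node.js 18+ before installing, then authenticates via the CLI. The `@anthropic-ai/claude-code` package name and the version-check logic are assumptions for illustration; consult the official installation docs for the authoritative commands.

```shell
# npm-based install requires Node.js 18 or newer; check first.
# (The native installer path has no such dependency.)
major=$(node --version | sed 's/^v//' | cut -d. -f1)
if [ "$major" -ge 18 ] 2>/dev/null; then
  # Package name assumed here; verify against the official docs.
  npm install -g @anthropic-ai/claude-code
else
  echo "Node.js 18+ required; use the native installer instead" >&2
fi

# Authenticate via the CLI, then confirm the install:
claude login
claude --version
```

The same version check is worth running before any npm-global install, since an older Node.js will fail in less obvious ways at runtime.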
Pairing Claude Code with Zilliz Cloud gives your agentic workflows instant semantic search over code—find similar functions, trace architectural patterns, and maintain deep context across large projects without the operational overhead of self-hosted vector infrastructure.