April 28 - Goldman Sachs has stopped employees in its Hong Kong operations from using Anthropic's artificial intelligence models, according to people familiar with the matter. Bank staff had been unable to access the Claude models as of a few weeks ago, four sources said.
The move reflects Goldman Sachs' strict interpretation of its contract with Anthropic, reached after the bank consulted the company. Following that review, Goldman concluded that Hong Kong-based employees should not have access to any Anthropic products, the sources said. The restriction applies only to Anthropic and does not extend to the bank's contracts with other AI vendors such as OpenAI.
An Anthropic spokesperson said its Claude models had never been officially "supported" in Hong Kong and declined to comment further on the matter. Goldman Sachs did not immediately respond to a request for comment.
While AI models built by U.S. firms, such as ChatGPT and Claude, are prohibited in mainland China, Hong Kong has largely remained outside those prohibitions, with U.S. companies imposing their own usage limits. Goldman's decision represents a company-specific restriction rather than a change in jurisdictional regulation, according to the available details.
Goldman Sachs' chief information officer, Marco Argenti, said in February that the bank had been collaborating with Anthropic to develop AI-powered agents intended to automate a growing range of internal functions. The access limitation signals a narrower contractual interpretation for Hong Kong-based staff despite those earlier public statements about ongoing development work.
Summary
Goldman Sachs has barred its Hong Kong bankers from using Anthropic's Claude models following a contractual review with Anthropic, with employees unable to access the models for several weeks. Anthropic says Claude was never officially supported in Hong Kong. The restriction does not apply to other AI vendors.
Key points
- Goldman Sachs prevented Hong Kong employees from accessing Anthropic's Claude AI models after adopting a strict interpretation of its contract following consultation with the company.
- An Anthropic spokesperson said Claude was never officially supported in Hong Kong; four sources said the access restriction had been in effect for a few weeks.
- The bank's decision is limited to Anthropic contracts and does not cover relationships with other AI providers such as OpenAI. Sectors affected include financial services, technology, and corporate IT operations that rely on third-party AI tools.
Risks and uncertainties
- Uncertainty over how contractual terms with AI vendors will be interpreted and enforced by global firms; this affects corporate procurement and IT governance in the finance and technology sectors.
- Potential operational impacts for teams in Hong Kong that had been using or evaluating Anthropic tools; these impacts center on internal automation and productivity projects.
- Ambiguity about regional support and formal availability of specific AI models in Hong Kong, as indicated by Anthropic's statement that Claude was never officially supported there.