Google Cloud and Hugging Face jointly announced a strategic partnership aimed at accelerating the development of generative AI and machine learning (ML).

This collaboration empowers developers by integrating Google Cloud’s robust infrastructure seamlessly with Hugging Face services. It will facilitate the training and serving of Hugging Face models on Google Cloud.

The partnership aligns with Hugging Face’s commitment to democratizing AI and further underscores Google Cloud’s dedication to supporting the open-source AI ecosystem. Google Cloud emerges as a strategic cloud partner for Hugging Face, offering its advanced infrastructure, including compute power, tensor processing units (TPUs), and graphics processing units (GPUs), as the preferred platform for Hugging Face training and inference workloads.

Developers will now have streamlined access to Google Cloud’s AI-optimized infrastructure, featuring TPUs and GPUs. This collaboration facilitates efficient training and serving of open models, enabling developers to build new generative AI applications effortlessly.

Key Collaborative Initiatives for Developers

Google Cloud and Hugging Face will collaborate closely to enhance developers’ capabilities in training and serving large AI models. Key initiatives include:

  1. Vertex AI Integration:
    • Enabling developers to train, tune, and serve Hugging Face models seamlessly with Vertex AI. This integration allows developers to harness Google Cloud’s purpose-built MLOps services for streamlined development of new generative AI applications.
  2. Google Kubernetes Engine (GKE) Support:
    • Enabling Hugging Face developers to leverage “do it yourself” infrastructure on GKE. This empowers developers to independently manage and scale models using Hugging Face-specific Deep Learning Containers on GKE.
  3. Access to Cloud TPU v5e:
    • Extending access to Cloud TPU v5e for more open-source developers. This version provides up to 2.5x more performance per dollar and up to 1.7x lower latency for inference compared to previous versions.
  4. Future Support for A3 VMs:
    • Introducing future support for A3 VMs, equipped with NVIDIA’s H100 Tensor Core GPUs. This enhancement offers 3x faster training and 10x greater networking bandwidth compared to the prior generation.
  5. Google Cloud Marketplace Integration:
    • Utilizing Google Cloud Marketplace to simplify management and billing for the Hugging Face managed platform. This includes Inference Endpoints, Spaces, AutoTrain, and other services.
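To make the “do it yourself” GKE option from item 2 concrete, a deployment along these lines would serve an open model on a GKE GPU node pool. This is a minimal illustrative sketch, not an official manifest from either company: the container image shown is Hugging Face’s public text-generation-inference server rather than a Google Cloud-specific Deep Learning Container, and the model ID, replica count, and resource limits are placeholder choices.

```yaml
# Hypothetical GKE Deployment serving an open Hugging Face model.
# Assumes a cluster with an NVIDIA GPU node pool and the GPU device
# plugin installed; image, model, and sizing are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hf-model-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hf-model-server
  template:
    metadata:
      labels:
        app: hf-model-server
    spec:
      containers:
        - name: tgi
          # Hugging Face's open-source inference server image.
          image: ghcr.io/huggingface/text-generation-inference:latest
          args: ["--model-id", "google/flan-t5-small"]  # placeholder model
          ports:
            - containerPort: 80
          resources:
            limits:
              nvidia.com/gpu: 1  # request one GPU per replica
```

Exposing the pods behind a Kubernetes Service (and scaling `replicas` up or down) is then an ordinary GKE operation, which is the “independently manage and scale models” workflow the partnership describes.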

Leadership Perspectives

Thomas Kurian, CEO at Google Cloud, remarked, “This partnership ensures that developers on Hugging Face will have access to Google Cloud’s purpose-built AI platform, Vertex AI, along with our secure infrastructure, which can accelerate the next generation of AI services and applications.”

Clement Delangue, CEO of Hugging Face, expressed excitement: “With this new partnership, we will make it easy for Hugging Face users and Google Cloud customers to leverage the latest open models together with leading optimized AI infrastructure and tools from Google Cloud, including Vertex AI and TPUs.”

Both Vertex AI and Google Kubernetes Engine (GKE) are anticipated to be available as deployment options on the Hugging Face platform in the first half of 2024. This collaborative venture is poised to unlock new possibilities in the realm of generative AI and ML development.


The B2B Marketer is the online destination for B2B marketing professionals seeking valuable insights, trends, and resources to drive their marketing strategies and achieve business success.