Google Cloud announced the general availability of Vertex AI at its Google I/O event. Vertex AI is a managed machine-learning platform intended to help organizations accelerate the deployment and maintenance of artificial intelligence models. According to the announcement, the platform requires nearly 80% fewer lines of code to train a model than competing platforms.
With Vertex AI, data science and ML engineering teams can:
- Access the AI toolkit used internally to power Google, which includes computer vision, language, conversation, and structured data capabilities and is continuously enhanced by Google Research.
- Deploy more useful AI applications faster with new MLOps features such as Vertex Vizier, which increases the rate of experimentation; the fully managed Vertex Feature Store, which helps practitioners serve, share, and reuse ML features; and Vertex Experiments, which accelerates the deployment of models into production through faster model selection (an illustrative SDK sketch follows this list).
- Manage models with confidence by removing the complexity of self-service model maintenance and repeatability, using MLOps tools such as Vertex Continuous Monitoring and Vertex Pipelines to streamline the end-to-end ML workflow.
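
To give a sense of the workflow these features target, the sketch below shows how training and deploying a tabular model might look with the Vertex AI Python SDK (google-cloud-aiplatform). The project ID, bucket, data path, and column names are placeholders rather than values from the announcement, and exact arguments can vary by SDK version.

```python
# Minimal sketch: train and deploy an AutoML tabular model with the
# Vertex AI Python SDK. Project, bucket, data path, and column names
# are placeholders, not values from the announcement.
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",             # placeholder GCP project ID
    location="us-central1",
    staging_bucket="gs://my-bucket",  # placeholder staging bucket
)

# Register a managed dataset from a CSV file in Cloud Storage.
dataset = aiplatform.TabularDataset.create(
    display_name="churn-dataset",
    gcs_source="gs://my-bucket/churn.csv",  # placeholder data path
)

# Configure and run an AutoML training job on the dataset.
job = aiplatform.AutoMLTabularTrainingJob(
    display_name="churn-training",
    optimization_prediction_type="classification",
)
model = job.run(
    dataset=dataset,
    target_column="churned",          # placeholder label column
    budget_milli_node_hours=1000,     # one node-hour training budget
)

# Deploy the trained model to an endpoint and request a prediction.
endpoint = model.deploy(machine_type="n1-standard-4")
prediction = endpoint.predict(instances=[{"tenure": 12, "plan": "basic"}])
print(prediction.predictions)
```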

Andrew Moore, vice president and general manager of Cloud AI and Industry Solutions at Google Cloud, said:
“We had two guiding lights while building Vertex AI: get data scientists and engineers out of the orchestration weeds, and create an industry-wide shift that would make everyone get serious about moving AI out of pilot purgatory and into full-scale production. We are very proud of what we came up with in this platform, as it enables serious deployments for a new generation of AI that will empower data scientists and engineers to do fulfilling and creative work.”