Features
MLOps and Generative AI Services
MAESTRO provides MLOps capabilities through Amazon SageMaker Studio. With ML and CI/CD pipelines, agencies can train, deploy, and monitor models automatically and at scale, using standardised workflows that boost the productivity of data science teams while maintaining model performance and quality in production.
With Amazon SageMaker JumpStart, MAESTRO also provides users (via dedicated instances) with access to a selection of foundation models (including Generative AI models), along with pre-built algorithms covering common ML tasks such as data classification (image, text, tabular) and sentiment analysis. Additionally, MAESTRO supports users in hosting quantized models.
For Gen AI API services, MAESTRO provides Amazon Bedrock, hosted exclusively in the Singapore region, which allows users to leverage models such as Claude 2 and Claude Instant. In the future, MAESTRO will also explore similar Gen AI API services from other Cloud Solution Providers, such as Azure OpenAI and Google Cloud Platform's Gemini Pro. Stay tuned for future updates.
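As an illustration, invoking a Bedrock-hosted Claude model from a notebook could look like the sketch below. The model ID and request shape follow Bedrock's public Claude 2 text-completion interface; the region, credential setup, and availability of boto3 in the environment are assumptions and would follow MAESTRO's own configuration rather than this sketch.

```python
import json


def build_claude_request(prompt: str, max_tokens: int = 300) -> str:
    """Build the JSON body expected by Anthropic Claude 2 on Bedrock."""
    return json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
        "temperature": 0.5,
    })


def invoke_claude(prompt: str) -> str:
    """Call the Bedrock runtime API; requires boto3 and valid AWS credentials."""
    import boto3  # assumed to be available in MAESTRO notebook environments

    client = boto3.client("bedrock-runtime", region_name="ap-southeast-1")
    response = client.invoke_model(
        modelId="anthropic.claude-v2",
        body=build_claude_request(prompt),
        contentType="application/json",
        accept="application/json",
    )
    return json.loads(response["body"].read())["completion"]
```

From a notebook, `invoke_claude("Summarise this policy memo: ...")` would return the model's completion as a string.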
No-code Machine Learning Tool
MAESTRO provides Amazon SageMaker Canvas, which offers a no-code interface that allows anyone to create ML models in minutes without any prior ML experience, using interactive visual interfaces and point-and-click tools.
Collaboration and Repository
MAESTRO provides GitLab and Nexus Repository for teams and agencies to share and collaborate. Nexus Repository allows users to pull the latest versions of open-source Python and R libraries into their JupyterHub and RStudio instances on Analytics.gov.
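For example, routing pip installs through a Nexus PyPI proxy typically takes a configuration fragment like the one below. The repository URL here is a hypothetical placeholder; on MAESTRO instances the actual endpoint is pre-configured.

```ini
# ~/.pip/pip.conf — route pip installs through the Nexus proxy
# (URL is a hypothetical placeholder, not MAESTRO's actual endpoint)
[global]
index-url = https://nexus.example.gov.sg/repository/pypi-proxy/simple
trusted-host = nexus.example.gov.sg
```

With this in place, a plain `pip install pandas` fetches packages via the proxy instead of the public index.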
Integration Services
MAESTRO is integrated with Vault, the central Data Discovery and Distribution Platform for WOG, allowing users to download approved data from Vault by simply running curl commands within JupyterHub and RStudio. With the new data classification and sensitivity support, users can now access Vault datasets classified up to Confidential (Cloud-Eligible) / Sensitive High from MAESTRO.
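The curl-based download described above can also be sketched in Python from a notebook. The endpoint path, dataset identifier, and bearer-token header below are illustrative assumptions, not Vault's actual API.

```python
import urllib.request

# Hypothetical base URL for illustration only; not Vault's real endpoint.
VAULT_BASE = "https://vault.example.gov.sg/api"


def build_download_request(dataset_id: str, token: str) -> urllib.request.Request:
    """Build an authenticated GET request for an approved Vault dataset."""
    return urllib.request.Request(
        f"{VAULT_BASE}/datasets/{dataset_id}/download",
        headers={"Authorization": f"Bearer {token}"},
    )


def download_dataset(dataset_id: str, token: str, dest: str) -> None:
    """Fetch the dataset and write it to a local file."""
    with urllib.request.urlopen(build_download_request(dataset_id, token)) as resp:
        with open(dest, "wb") as f:
            f.write(resp.read())
```

The equivalent curl command would pass the same URL and `Authorization` header on the command line.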
MAESTRO is also integrated with Cloak, a central privacy toolkit that helps users apply data transformation techniques and PII detection/anonymisation. Users can access Cloak’s APIs and download transformed data directly from MAESTRO’s environment.
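A call to such an anonymisation API from MAESTRO's environment could look like the sketch below. The endpoint, payload shape, and technique names are illustrative assumptions, not Cloak's documented interface.

```python
import json
import urllib.request

# Hypothetical base URL for illustration only; not Cloak's real endpoint.
CLOAK_BASE = "https://cloak.example.gov.sg/api"


def build_anonymise_payload(text: str, techniques: list) -> bytes:
    """Serialise the text and requested transformations as a JSON body."""
    return json.dumps({"text": text, "techniques": techniques}).encode()


def anonymise(text: str, token: str) -> dict:
    """POST free text to the (hypothetical) PII-anonymisation endpoint."""
    req = urllib.request.Request(
        f"{CLOAK_BASE}/anonymise",
        data=build_anonymise_payload(text, ["pii-detection", "masking"]),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The transformed data returned by such a call could then be written out and used directly within MAESTRO's environment.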
Most importantly, MAESTRO is able to establish data connectivity with agencies' data systems, which we note is an important requirement for agencies' MLOps projects today. The technical mode of connectivity is assessed on a case-by-case basis, as architectural set-ups differ across agency systems.
GPU-as-a-Service
Since the launch of our MLOps services, we have seen a steady increase in usage of GPU instances, driven by the rising computational demands of users' AI/ML models on the platform, especially for LLM training and deployment. In the coming months, we will provide scalable compute resources to support WOG agencies, which can be leveraged on our platform as GPU-as-a-Service (e.g. A100, L4, A10G). The growing GPU usage and demand also serve as a compelling driver for us to focus our efforts on our MLOps platform, which allows for dynamic scaling of infrastructure and compute resources directly from Cloud Solution Providers. Stay tuned for future updates.
A One-Stop AI/ML Ops Solution That Efficiently Manages AI/ML Model Lifecycle at Scale