FAQs | Singapore Government Developer Portal

FAQs

What is MAESTRO and what is it used for?

MAESTRO (which stands for Machine Learning & AI Enterprise-level Secure Tool-Suite for Reliable Operations) is a Whole-of-Government (WOG) secure platform offering powered by Analytics.gov that supports agencies’ end-to-end production AI/ML use cases, including advanced AI capabilities such as Generative AI and Machine Learning Operations (MLOps).

It also enables cross-agency collaboration through the various AI/MLOps services and code repositories made available on this centralised platform.

What data classifications does MAESTRO support?

As a secured environment, MAESTRO now supports data classified up to CONFIDENTIAL (CLOUD-ELIGIBLE) / Sensitive HIGH. For Sensitive HIGH datasets, this applies only to system-ingested datasets from an agency’s data systems and requires API integration with those systems.

Who can use MAESTRO?

MAESTRO (powered by Analytics.gov) is a WOG platform open to all Singapore public service officers with a valid gov.sg email account and a non-SE GSIB machine.

To find out more about the onboarding details, please send in your request via this form.

What are the tools and services available in MAESTRO?

The services currently available under MAESTRO’s Amazon SageMaker offering (as of June 2024) are:

  1. Amazon SageMaker Studio (MLOps)
  2. Amazon SageMaker Canvas (No-Code AI/ML)
  3. Amazon Bedrock (GenAI API services) – supports up to CONFIDENTIAL (CLOUD-ELIGIBLE); see the sketch below
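For illustration, here is a minimal sketch of how a project team might call a Bedrock-hosted model (e.g. Claude 2) from a notebook using the standard AWS SDK (boto3). It assumes the MAESTRO environment already provides the required credentials and network access; the model ID and prompt are placeholders, and actual availability depends on the models enabled on the platform.

```python
# Minimal sketch: calling a Bedrock-hosted Claude model via boto3.
# Assumes MAESTRO/Analytics.gov already supplies AWS credentials and
# network routing; the model ID is an example and availability may differ.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="ap-southeast-1")

body = json.dumps({
    "prompt": "\n\nHuman: Summarise the key stages of the ML lifecycle.\n\nAssistant:",
    "max_tokens_to_sample": 200,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",  # placeholder; check which models are enabled
    body=body,
    contentType="application/json",
    accept="application/json",
)

print(json.loads(response["body"].read())["completion"])
```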

There are also additional MAESTRO platform features built on top of cloud-native features:

  1. MAESTRO Amazon SageMaker User Portal
  2. MLOps Template for R and Python
  3. SHIPHATS Gitlab Mirroring
  4. GGUF Quantised HuggingFace Model Inference Container
  5. Standardised Data Integrations (e.g. S3; see the sketch after this list)
  6. Service Integrations (GitLab, Nexus Repository, Cloak, Vault, TCS)
  7. Model Inference Integrations (GEN-Routable API Gateway, CStack, APEX)
  8. Container Image Builder
  9. Model Telemetry Dashboard
  10. Cost Utilisation Module (Coming Soon)
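As a simple illustration of the standardised data integrations (item 5), the sketch below reads a dataset from S3 inside a SageMaker Studio notebook using boto3 and pandas. The bucket and object key are placeholders; actual buckets are provisioned through the platform’s data integrations.

```python
# Minimal sketch: reading an agency dataset from S3 in a SageMaker
# Studio notebook. Bucket and key names are placeholders; actual
# buckets are provisioned through MAESTRO's data integrations.
import boto3
import pandas as pd

s3 = boto3.client("s3")

obj = s3.get_object(Bucket="example-agency-bucket", Key="datasets/sample.csv")
df = pd.read_csv(obj["Body"])

print(df.head())
```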

In FY2024, as part of building our cross-cloud capabilities, we will also continue to expand our Large Language Model (LLM) API services beyond AWS to bring in Azure OpenAI as well as Vertex AI model APIs, such as Gemini Pro. Please stay tuned for updates in the near future.

Why should I use MAESTRO, and what would be different if I were to set up my own environment through my agency’s GCC account instead?

Aside from the default solution offerings provided by the native cloud service, MAESTRO’s AI/MLOps services (featuring Amazon SageMaker) aim to provide a more seamless and holistic experience for user agencies through additional engineering work done in-house by MAESTRO’s product team. Since the start of the year, MAESTRO has gone through a series of iterations and experiments on each stage of the ML lifecycle carried out on Amazon SageMaker before reaching the architecture design and implementation in place today. The key developments are:

  1. Service availability – MAESTRO’s Amazon SageMaker is hosted in GCC’s intranet-facing environment, so additional development and testing have been done by MAESTRO’s product team to ensure the services’ viability and availability.
  2. Cross-cloud GenAI capabilities – With Amazon SageMaker JumpStart, MAESTRO provides users (via dedicated instances) with access to a selection of foundation models (including Generative AI models) and pre-built algorithms covering common ML tasks such as data classification (image, text, tabular) and sentiment analysis. AG also supports users in hosting quantised models. For GenAI API services, MAESTRO provides Amazon Bedrock, hosted exclusively in the Singapore region, with models such as Claude 2 and Claude Instant. MAESTRO will explore similar GenAI API services from other Cloud Solution Providers, such as Azure OpenAI and Google Cloud Platform Gemini Pro. Stay tuned for future updates.
  3. WOG AAD authentication – onboarded users simply log in via WOG AAD authentication instead of an SSO login to the main AWS console.
  4. User Domain Management Portal – a fully in-house built front end that gives user project teams a portal with dashboards to better manage their projects (members and their access controls, endpoint creation and management, etc.) via a user interface.
  5. API Gateway for Amazon SageMaker Endpoints – this gateway is provided for MAESTRO users on the platform’s Amazon SageMaker service so that users can deploy endpoints to their respective agencies’ destination systems (see the sketch after this list).
  6. Security and Permissions configurations – these are already pre-set within MAESTRO’s Amazon SageMaker service to ensure that it meets IM requirements.
  7. Support for R Project Templates – another fully in-house development so that user agencies can work on both Python (available by default) and R projects on MAESTRO’s Amazon SageMaker platform.
  8. Access to up-to-date packages and libraries, along with the means to collaborate and share code repositories amongst project team members.
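To illustrate item 5, below is a minimal sketch of invoking a deployed model endpoint with the SageMaker runtime API. The endpoint name and payload format are placeholders, and in practice calls from an agency’s destination system would typically be routed through MAESTRO’s API Gateway rather than made directly to the endpoint.

```python
# Minimal sketch: invoking a deployed SageMaker endpoint via the
# SageMaker runtime API. The endpoint name and payload are placeholders;
# agency systems would typically route through MAESTRO's API Gateway.
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="example-churn-model",  # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps({"instances": [[0.3, 1.2, 5.0]]}),
)

print(json.loads(response["Body"].read()))
```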

Are there any subscription or payment fees to use AG?

There is no cost to users for using AG/MAESTRO until the end of FY24.

In the long run, the plan is for MAESTRO to commit to savings plans from CSPs, and these cost savings could be passed back to end users once a cost-recovery pricing model is implemented beyond FY24.

Meanwhile, project teams will be able to track their resource utilisation via the end-user portal’s cost module feature (to go live by Sep 2024).
