Pre-event Workshops
01:30 PM - 03:00 PM
[Workshop A1] English Is The New Programming Language | Prompt Engineering Workshop With LaunchPad
Master speaking to AI with us! In this session, you will:
- Get a sneak preview of the latest AI-empowered functions of GovTech's newest platform, LaunchPad, and how it can empower you in using AI
- Understand the fundamental principles and concepts of prompt engineering, including the anatomy of an engineered prompt, and apply the CO-STAR methodology to supercharge your prompts (see the sketch after this listing)
- Learn how to apply the prompt engineering mindset to create effective prompts for various use cases, be it summarisation, classification, rewriting, generation and more
- Develop best practices and pick up tips and tricks that help your prompts stand out from your peers
Who should attend
Public Officers only
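To give a flavour of the CO-STAR methodology covered above, here is a minimal, hypothetical sketch in Python. The task wording and the build_prompt helper are illustrative assumptions, not LaunchPad's actual interface:

```python
# CO-STAR structures a prompt into six parts: Context, Objective, Style,
# Tone, Audience, and Response format. The content below is illustrative.
COSTAR_TEMPLATE = """\
CONTEXT: You are helping a public officer draft internal communications.
OBJECTIVE: Summarise the meeting notes below into exactly three key decisions.
STYLE: Concise and factual, like an official minute.
TONE: Neutral and professional.
AUDIENCE: Senior management with limited time.
RESPONSE FORMAT: A numbered list of three items.

Meeting notes:
{notes}
"""

def build_prompt(notes: str) -> str:
    """Fill the CO-STAR template with the user's meeting notes (hypothetical helper)."""
    return COSTAR_TEMPLATE.format(notes=notes)

if __name__ == "__main__":
    print(build_prompt("Team agreed to ship v2 in July; budget review moved to Q3."))
```

The resulting string can be sent to any LLM; the point is that each CO-STAR section constrains a different dimension of the response.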
03:15 PM - 04:45 PM
[Workshop A2] Unlocking Powers of Computer Vision, Starting from Data (Video/Image) Acquisition
Discover the many different ways to acquire data. Unlock the full potential of Computer Vision and understand how to extract valuable insights from your data. Join our workshop to learn the latest techniques for data acquisition, deep learning models, and no-code development software (a short acquisition sketch follows this listing). Harness the power of Computer Vision to transform the way we work, play and live!
Who should attend
Public Officers only
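As a taste of the data-acquisition step, here is a minimal sketch using OpenCV (the opencv-python package) to grab frames from a video source; the source index and frame count are illustrative assumptions:

```python
import cv2  # pip install opencv-python

# Open a video source: 0 is usually the default webcam; a file path
# or an RTSP stream URL works the same way.
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("Could not open video source")

frames = []
while len(frames) < 100:        # collect a small sample of 100 frames
    ok, frame = cap.read()      # each frame is a BGR NumPy array
    if not ok:
        break                   # end of stream or read error
    frames.append(frame)

cap.release()
if frames:
    print(f"Acquired {len(frames)} frames, each of shape {frames[0].shape}")
```

The same loop works for files and network streams, which is why acquisition is often the first thing to get right before any model sees the data.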
01:30 PM - 03:00 PM
[Workshop B1] Vision-and-Language Research and Applications: Towards Universal Multimodal Intelligence
Vision-and-language research has seen much success recently, enabling improved performance in many downstream multimodal AI applications. This talk will introduce Salesforce Research's efforts in advancing state-of-the-art vision-and-language AI from two perspectives: library development and fundamental research. For the library, we introduce LAVIS (4.3k GitHub stars), a one-stop solution for vision-language research and applications. LAVIS is a central hub that supports 40+ vision-language models with a unified interface for training and inference (a minimal usage sketch follows this listing). For research, we introduce our line of work including ALBEF, BLIP, BLIP-2, and the latest InstructBLIP. In particular, this talk will focus on BLIP-2, a generic vision-language pre-training method that enables frozen LLMs to understand images.
Who should attend
Open to all
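LAVIS loads any of its supported models through one unified entry point. Below is a minimal sketch of asking BLIP-2 a question about a local image, assuming an example.jpg on disk; the model name follows LAVIS's documentation but may vary across releases:

```python
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess  # pip install salesforce-lavis

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load BLIP-2 with a frozen FLAN-T5 language model, plus matching preprocessors.
model, vis_processors, _ = load_model_and_preprocess(
    name="blip2_t5", model_type="pretrain_flant5xl", is_eval=True, device=device
)

raw_image = Image.open("example.jpg").convert("RGB")  # any local image
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)

# The frozen LLM answers a question conditioned on the image.
answer = model.generate({"image": image, "prompt": "Question: what is shown in the image? Answer:"})
print(answer)
```

Swapping in ALBEF, BLIP, or InstructBLIP is a matter of changing the name and model_type arguments, which is the unified interface the talk refers to.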
03:15 PM - 04:45 PM
[Workshop B2] Harvest Your Alpaca In Your Environment: LLMs Local Training and Hosting (Advanced For WoG Data Analysts / Data Scientists)
Large Language Models (LLMs) are powerful, with game-changing capabilities. We will introduce LLMs, their use cases, strengths, and limitations. LLMs come in various sizes, and some of them can be trained and hosted locally. We will discuss model selection, emphasising that size isn’t everything, and explore smaller LLMs like LLaMA, Alpaca, Vicuna, and Dolly. We will then demonstrate in a hands-on session how to effectively fine-tune an LLM by leveraging low-rank adaptation (LoRA) through libraries such as PEFT (a minimal sketch follows this listing).
Who should attend
Open to all
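To give a flavour of the hands-on portion, here is a minimal sketch of attaching LoRA adapters to a causal language model with the PEFT library; the checkpoint name and hyperparameters are illustrative assumptions, not the workshop's lab material:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model  # pip install peft transformers

base = "huggyllama/llama-7b"  # illustrative checkpoint; any local causal LM works
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA freezes the base weights and trains small low-rank adapter matrices instead,
# which is what makes local fine-tuning of these models feasible.
config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt (LLaMA-style names)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the base model

# From here, train as usual (e.g. with transformers.Trainer) on your own dataset.
```

Because only the adapter weights are trained and saved, the result is a small artifact that can be hosted alongside the frozen base model.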
Last updated 09 June 2023