Hands-On Workshop: Build an AI-Powered Personalization Engine With Confluent & Databricks on AWS
Join a hands-on lab where you’ll build a real-time personalization engine using Confluent Cloud, Apache Flink, Databricks, and Amazon Web Services (AWS). Learn how to stream change data from Oracle using Confluent connectors, enrich and process it with Flink, and write it to Delta Lake with Tableflow. Then, use Databricks Genie (powered by Mosaic AI) to generate promotional content and audience segments dynamically based on live booking and review data. Ideal for data engineers, architects, and AI builders looking to put streaming pipelines and foundation models into practice.
Speaker: Mandar Bhoir, Solutions Engineer, Confluent
Why Attend?
- Build streaming data pipelines with fully managed Apache Kafka® and Apache Flink®
 
- Source real-time change data from Oracle and convert processed data streams to Delta tables on Databricks
 
- Generate personalized marketing content using Databricks Genie and foundation models
 
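To give a flavor of the Flink enrichment step described above, here is a minimal Flink SQL sketch. The table and column names (`bookings`, `review_stats`, `enriched_bookings`, and their fields) are hypothetical placeholders, not the workshop's actual schema; the lab walks through the real statements.

```sql
-- Sketch only: join the live booking stream with aggregated review data
-- so the enriched stream carries both signals downstream to Delta tables.
INSERT INTO enriched_bookings
SELECT
  b.booking_id,
  b.customer_id,
  b.booking_ts,
  r.avg_rating          -- per-property rating derived from the review stream
FROM bookings AS b
LEFT JOIN review_stats AS r
  ON b.property_id = r.property_id;
```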
Please ensure you have completed the prerequisites before starting the workshop:
- A Confluent Cloud account (sign up if you don't have one)
 
- A Databricks account with an existing workspace (a trial account with the default workspace works)
 
- AWS CLI installed and authenticated with an AWS account that has permissions to create resources
 
- Terraform installed
 
- Docker Desktop installed
 
- Git installed
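
Before the session, you can quickly confirm the command-line tools from the list above are on your `PATH` with a small shell check (the tool names match the prerequisites; Confluent Cloud and Databricks accounts still need to be verified manually):

```shell
#!/bin/sh
# Check that the workshop's CLI prerequisites are installed.
missing=0
for tool in aws terraform docker git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
    missing=$((missing + 1))
  fi
done
echo "missing tools: $missing"
```

If any tool reports MISSING, install it before the workshop; remember to also run `aws configure` (or equivalent) so the AWS CLI is authenticated.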