C2DR-VAE: Adaptive Recommendation Framework via Cluster-Conditioned and Dynamically Refined Variational Autoencoders
Abstract
Recommendation systems are central to customer engagement in the digital realm, yet they continue to face the classic challenges of the cold start problem, scalability, and keeping pace with rapidly evolving customer preferences. This study presents C2DR-VAE (Cluster-Conditioned and Dynamically Refined Variational Autoencoder), a new adaptive, scalable, and privacy-aware recommendation framework that dynamically combines clustering with generative modeling. First, user-item interaction data is clustered with K-means to form behaviourally consistent clusters, which serve as prior knowledge for the VAE. Unlike conventional pipelines that treat clustering as fixed preprocessing, the proposed architecture refines these clusters during training by updating the cluster centres at a specified frequency using the learned latent embeddings. This dynamic refinement allows the model to track changing user behaviour and to learn representations more efficiently. By combining robust data-driven clustering with the generative capacity of VAEs, C2DR-VAE addresses cold-start sparsity while remaining scalable to large datasets and delivering high-quality personalized recommendations in dynamic environments. To assess robustness, the framework is evaluated with cross-domain transfer experiments across multiple domains. Evaluation is conducted through an integrated framework that combines standard accuracy and ranking measures, user-centric metrics, and system-level performance indicators to comprehensively assess both the scalability and the effectiveness of the model. The proposed framework advances recommendation research toward user-centred, domain-robust, and self-improving solutions.
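The dynamic refinement step described above, periodically recomputing cluster centres from the VAE's latent embeddings rather than fixing them at preprocessing time, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the latent embeddings `z`, the number of clusters, the refresh frequency, and the toy initialization are all assumed for demonstration.

```python
import numpy as np

def assign_clusters(z, centers):
    # Squared Euclidean distance from each embedding to each centre,
    # shape (n_users, n_clusters); nearest centre wins.
    d = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

def refine_centers(z, labels, centers):
    # Move each centre to the mean of the latent embeddings currently
    # assigned to it; empty clusters keep their old centre.
    new = centers.copy()
    for k in range(centers.shape[0]):
        mask = labels == k
        if mask.any():
            new[k] = z[mask].mean(axis=0)
    return new

# Toy latent embeddings standing in for a VAE encoder's output:
# two behavioural groups centred near 0 and near 3.
rng = np.random.default_rng(0)
z = np.vstack([rng.normal(0.0, 0.1, (50, 8)),
               rng.normal(3.0, 0.1, (50, 8))])

# Toy init: one seed embedding per group (a real pipeline would use
# K-means on the interaction data for the initial centres).
centers = np.array([z[0], z[50]])

refresh_every = 5  # refresh frequency (assumed hyperparameter)
for epoch in range(10):
    # ... VAE training step on the current batch would go here ...
    if epoch % refresh_every == 0:
        labels = assign_clusters(z, centers)
        centers = refine_centers(z, labels, centers)
```

After each refresh the cluster-conditioned prior is rebuilt from the centres, so the conditioning signal drifts with user behaviour instead of staying frozen at its preprocessed state.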
Copyright (c) 2026 ITEGAM-JETIA

This work is licensed under a Creative Commons Attribution 4.0 International License.








