Federated learning (FL) has emerged as a promising approach for collaboratively training global models and classifiers without sharing private data. However, existing studies primarily develop distinct methodologies for typical and personalized FL (tFL and pFL), which makes it difficult to design training methods that apply across both settings. Moreover, previous approaches often rely on data- and feature-augmentation branches while overlooking the quantity of local data, leading to suboptimal performance and high communication costs, particularly in multi-class classification tasks. To address these challenges, we propose Data-quantity Aware Regularization (FedDAR), a novel add-on regularization technique that integrates seamlessly with existing tFL and pFL frameworks. This network-agnostic method reformulates the local training procedure with two key components: 1) enriched-feature augmentation, which aligns the local model's features with pre-initialized features to obtain unbiased representations under imbalanced data distributions while requiring fewer global communication rounds, and 2) a data-quantity aware branch, which weights the optimization of the local model by the local data size using both supervised and self-supervised labels. We demonstrate significant performance improvements in both tFL and pFL, achieving state-of-the-art results on the MNIST, F-MNIST, CIFAR-10/100, and Tiny-ImageNet benchmarks.
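To make the two components concrete, the following is a minimal sketch of a data-quantity-aware regularized local update in PyTorch, written under stated assumptions rather than as the paper's exact formulation: `feature_anchors`, `quantity_weight`, `lambda_reg`, and the MSE-based feature alignment are illustrative names and forms we introduce here, and the self-supervised branch is omitted for brevity.

```python
# Sketch of a FedDAR-style local update (assumptions, not the exact method):
# - "enriched-feature augmentation" is read as pulling each sample's features
#   toward a frozen, pre-initialized per-class anchor;
# - "data-quantity aware" is read as scaling the regularizer by the client's
#   share of the total training data.
import torch
import torch.nn.functional as F

def local_update(model, head, loader, feature_anchors, n_local, n_total,
                 lambda_reg=0.1, lr=0.01, epochs=1, device="cpu"):
    """One client's local training round.

    model           backbone returning feature vectors of shape (B, feat_dim)
    head            classifier mapping features to logits
    feature_anchors frozen pre-initialized features, shape (num_classes, feat_dim)
    n_local/n_total client / total data sizes, used to scale the regularizer
                    (an assumed form of the data-quantity aware weighting)
    """
    opt = torch.optim.SGD(
        list(model.parameters()) + list(head.parameters()), lr=lr
    )
    quantity_weight = n_local / n_total  # assumption: larger clients regularize more
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            feats = model(x)
            logits = head(feats)
            # Standard supervised loss on the local labels.
            loss_sup = F.cross_entropy(logits, y)
            # Feature-alignment term: match each sample's features to the
            # frozen anchor of its class (one plausible reading of
            # "coordinated with pre-initialized features").
            loss_reg = F.mse_loss(feats, feature_anchors[y].detach())
            loss = loss_sup + lambda_reg * quantity_weight * loss_reg
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model, head
```

Scaling the regularizer by `n_local / n_total` is one simple way to make the penalty "associate with local data size" as the abstract describes; because the term only modifies each client's local objective, the sketch plugs into either a tFL aggregation loop (e.g., FedAvg) or a pFL one without changing the communication protocol.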