# EdgeFLite: Edge Federated Learning for Improved Training Efficiency
- EdgeFLite is a cutting-edge framework developed to tackle the memory limitations of federated learning (FL) on edge devices with restricted resources. By partitioning large convolutional neural networks (CNNs) into smaller sub-models and distributing the training across local clients, EdgeFLite ensures efficient learning while maintaining data privacy. Clients in clusters collaborate by sharing learned representations, which are then aggregated by a central server to refine the global model. Experimental results on medical imaging and natural datasets demonstrate that EdgeFLite consistently outperforms other FL frameworks, setting new benchmarks for performance.
- Within 6G-enabled mobile edge computing (MEC) networks, EdgeFLite addresses the challenges posed by client diversity and resource constraints. It optimizes local models and resource allocation to improve overall efficiency. Through a detailed convergence analysis, this research establishes a clear relationship between training loss and resource usage. The innovative Intelligent Frequency Band Allocation (IFBA) algorithm minimizes latency and enhances training efficiency by 5-10%, making EdgeFLite a robust solution for improving federated learning across a wide range of edge environments.
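The cluster-level aggregation step can be pictured with a short FedAvg-style sketch. The function name, data layout, and plain averaging scheme below are illustrative assumptions, not the actual EdgeFLite implementation:

```python
# Illustrative sketch of server-side aggregation of client sub-model weights.
# Names and structure are assumptions, not the EdgeFLite codebase itself.
from typing import Dict, List

import torch


def aggregate_submodels(client_states: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
    """Average the sub-model parameters uploaded by the clients of one cluster."""
    averaged = {}
    for name in client_states[0]:
        # Stack the corresponding tensor from every client and take the element-wise mean.
        averaged[name] = torch.stack([state[name].float() for state in client_states]).mean(dim=0)
    return averaged


# Example usage: average the weights of three identically shaped sub-models.
clients = [torch.nn.Linear(4, 2).state_dict() for _ in range(3)]
global_state = aggregate_submodels([dict(c) for c in clients])
```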
## Preparation
### Dataset Setup
- The CIFAR-10 and CIFAR-100 datasets, both derived from the Tiny Images dataset, are downloaded automatically (see the sketch after this list). CIFAR-10 includes 60,000 32x32 color images across 10 categories: airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. There are 6,000 images per category, split into 5,000 for training and 1,000 for testing.
- CIFAR-100 is a more complex dataset, featuring 100 categories with fewer images per class compared to CIFAR-10. These datasets serve as standard benchmarks for image classification tasks and provide a robust evaluation environment for machine learning models.
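A minimal sketch of fetching both datasets, assuming `torchvision` is available (the repository's own training scripts are expected to handle the download automatically):

```python
# Illustrative download of CIFAR-10/100 via torchvision.
from torchvision import datasets, transforms

transform = transforms.ToTensor()

cifar10_train = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
cifar10_test = datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)
cifar100_train = datasets.CIFAR100(root="./data", train=True, download=True, transform=transform)
cifar100_test = datasets.CIFAR100(root="./data", train=False, download=True, transform=transform)

print(len(cifar10_train), len(cifar10_test))    # 50000 10000
print(len(cifar100_train), len(cifar100_test))  # 50000 10000
```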
### Dependency Installation
```bash
PyTorch 1.10.2
OpenCV 4.5.5
```
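A quick sanity check, assuming the packages are importable under their usual Python module names (`torch` and `cv2`):

```python
# Verify that the pinned dependencies are installed and report their versions
# (the expected version numbers come from the list above).
import cv2
import torch

print("PyTorch:", torch.__version__)  # expected 1.10.x
print("OpenCV:", cv2.__version__)     # expected 4.5.x
```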
## Running Experiments
*Top-1 accuracy (%) of EdgeFLite compared to state-of-the-art FL methods on the CIFAR-10 and CIFAR-100 test datasets.*
1. **Specify Experiment Name:**
Add `--spid` to specify the experiment name in each training script; for example (the name passed to `--spid` below is only a placeholder):
```bash
python run_gkt.py --is_fed=1 --fixed_cluster=0 --split_factor=1 --num_clusters=20 --num_selected=20 --dataset=cifar10 --num_classes=10 --is_single_branch=0 --is_amp=0 --num_rounds=300 --fed_epochs=1 --spid "experiment_name"
```
2. **Training Scripts for CIFAR-10:**
- **Centralized Training:**
```bash
python run_local.py --is_fed=0 --split_factor=1 --dataset=cifar10 --num_classes=10 --is_single_branch=0 --is_amp=0 --epochs=300
```
- **EdgeFLite:**
```bash
python train_EdgeFLite.py --is_fed=1 --fixed_cluster=0 --split_factor=4 --num_clusters=5 --num_selected=5 --dataset=cifar10 --num_classes=10 --is_single_branch=0 --is_amp=0 --num_rounds=300 --fed_epochs=1
```