Zhang, Junshan

Objectives

The past few years have witnessed an explosive growth of Internet of Things (IoT) devices. An earlier report by Cisco predicted that there would be 50 billion connected devices by 2020[1], and the number of connected things (including machines, people, and devices) has the potential to grow to 500 billion by 2025[2]. A general consensus is that a high percentage of IoT-created data will have to be stored and analyzed close to, or at, the network edge[3]. This has given rise to a new computing paradigm, namely edge computing, which introduces a new architecture that extends cloud computing to the edge of the network so that ultra-low latency can be achieved.

Many edge networks feature edge devices connected to one another, or to a central node, over wireless links. The unreliable nature of wireless connectivity, together with constraints on transmission power, bandwidth, and computational resources at edge devices, poses a significant challenge for the computation, communication, and coordination required to learn an accurate model at the network edge. To tackle this fundamental challenge, this project takes a principled approach to developing an integrated wireless edge learning framework that accounts for the limitations of edge computing resources and communications in a holistic manner.

Tasks

  • Task 1: Bandlimited Coordinate Descent: Learning-driven Joint Power Allocation and Gradient Estimation (an illustrative sketch follows this list).
  • Task 2: Bandlimited Gradient Sketching: Power Control and Bandwidth-Accuracy Tradeoffs.
  • Task 3: Bandlimited Coordinate Descent based on Zero-order Estimates.
  • Task 4: Bandlimited Gradient Sketching based on Zero-order Methods.
  • Task 5: Exploiting RF Characteristics in DL-Based CSI Estimation.
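
As an illustration of the band-limited idea underlying Tasks 1 and 3, the following hedged sketch shows one round of coordinate descent in which an edge device may report only a bandwidth-limited number of gradient coordinates per round. The least-squares loss, the top-k selection rule, and all names (e.g., bandlimited_update, k_budget) are illustrative assumptions, not the proposed algorithm.

    # Hedged sketch: one band-limited coordinate descent round in which an edge
    # device transmits only k_budget gradient coordinates. The least-squares
    # loss and the top-k selection rule are illustrative choices.
    import numpy as np

    def local_gradient(w, X, y):
        """Gradient of a least-squares loss at one edge device (placeholder)."""
        return X.T @ (X @ w - y) / len(y)

    def bandlimited_update(w, X, y, k_budget, lr=0.1):
        """Report only the k_budget largest-magnitude coordinates upstream."""
        g = local_gradient(w, X, y)
        top_k = np.argsort(np.abs(g))[-k_budget:]   # coordinates worth the bandwidth
        sparse_g = np.zeros_like(g)
        sparse_g[top_k] = g[top_k]                  # all other coordinates are dropped
        return w - lr * sparse_g                    # descent step applied at the server

    # Toy usage on synthetic data.
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(64, 20)), rng.normal(size=64)
    w = np.zeros(20)
    for _ in range(5):
        w = bandlimited_update(w, X, y, k_budget=4)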

Evaluation and Infrastructure

  • Datasets:

    We plan to generate synthetic datasets and to use open datasets such as MNIST (handwritten digits), CIFAR-10, and ImageNet (images), which are widely adopted in machine learning applications. To demonstrate the bandwidth efficiency of our sketching techniques in structured scenarios, we will also evaluate the algorithms on datasets such as the Criteo 1 TB dataset and a DNA metagenomics dataset, following, e.g., PI Dasarathy’s work[4]; an illustrative count-sketch example follows this list.

  • Prototype and Experiments:

    Given the theoretical nature of the proposed research, instead of building a full-fledged prototype we will implement the proposed algorithms for integrated ML training over wireless networks, using Coral Dev Boards, which include Google Edge TPUs designed to run AI workloads at the edge, together with a GPU server. Wireless link effects in over-the-air computation will be emulated using a DSP board. For the learning model, we plan to adopt convex ML models such as SVM, so that the proposed solutions can be assessed against convergence guarantees for convex objectives; more sophisticated ML problems will then be evaluated on the real-world datasets.
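
To make concrete the kind of sketching technique referenced above (in the spirit of the count-sketch approach of [4]), the following hedged sketch compresses the subgradient of a hinge-loss SVM with a count sketch before "transmission" and decodes it with a median-of-rows estimate. The sketch sizes, the loss, and all names (e.g., CountSketch, compress) are illustrative assumptions rather than the proposed design.

    # Hedged sketch: count-sketch compression of an SVM subgradient, as an
    # illustration of band-limited gradient sketching; parameters are arbitrary.
    import numpy as np

    class CountSketch:
        def __init__(self, dim, rows=5, buckets=64, seed=0):
            rng = np.random.default_rng(seed)
            self.h = rng.integers(0, buckets, size=(rows, dim))  # bucket hashes
            self.s = rng.choice([-1.0, 1.0], size=(rows, dim))   # sign hashes
            self.rows, self.buckets = rows, buckets

        def compress(self, g):
            """Project the gradient into a rows-by-buckets table (what is transmitted)."""
            table = np.zeros((self.rows, self.buckets))
            for r in range(self.rows):
                np.add.at(table[r], self.h[r], self.s[r] * g)
            return table

        def decompress(self, table):
            """Median-of-rows estimate of each gradient coordinate at the receiver."""
            est = self.s * table[np.arange(self.rows)[:, None], self.h]
            return np.median(est, axis=0)

    # Toy usage: compress the hinge-loss subgradient of a linear SVM.
    rng = np.random.default_rng(1)
    X, y, w = rng.normal(size=(128, 50)), rng.choice([-1.0, 1.0], size=128), np.zeros(50)
    viol = y * (X @ w) < 1                                    # margin violations
    grad = -(X[viol] * y[viol][:, None]).mean(axis=0) + 1e-3 * w
    cs = CountSketch(dim=50)
    grad_hat = cs.decompress(cs.compress(grad))               # approximate gradient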

References

[1] D. Evans, “The internet of things: How the next evolution of the internet is changing everything,” Cisco White Paper, 2011.

[2] J. Camhi, “Former Cisco CEO John Chambers predicts 500 billion connected devices by 2025,” Business Insider, 2015.

[3] C. MacGillivray, V. Turner, R. Clarke, J. Feblowitz, K. Knickle, L. Lamy, M. Xiang, A. Siviero, and M. Cansfield, “IDC FutureScape: Worldwide internet of things 2016 predictions,” IDC FutureScape, 2015.

[4] A. Aghazadeh, R. Spring, D. Lejeune, G. Dasarathy, A. Shrivastava, and R. Baraniuk, “MISSION: Ultra large-scale feature selection using count-sketches,” in Proceedings of the 35th International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, J. Dy and A. Krause, Eds., vol. 80. Stockholm, Sweden: PMLR, 10–15 Jul 2018, pp. 80–88. [Online]. Available: http://proceedings.mlr.press/v80/aghazadeh18a.html