Abstract
The increasing pervasiveness and scale of machine learning technologies are posing fundamental challenges for their realisation. In the main, current algorithms are centralised, with a large number of processing agents, distributed across parallel processing resources, accessing a single, very large data object. This creates bottlenecks as a result of limited memory access rates. Distributed learning has the potential to resolve this problem by employing networks of co-operating agents, each operating on a subset of the data, but as yet their suitability for realisation on parallel architectures such as multicore is unknown. This paper presents the results of a case study deploying distributed dictionary learning for microarray gene expression bi-clustering on a 16-core Epiphany multicore. It shows that distributed learning approaches can enable near-linear speed-up with the number of processing resources and that, via the use of DMA-based communication, a 50% increase in throughput can be achieved.
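To make the distributed-learning idea concrete, the sketch below illustrates dictionary learning with a network of co-operating agents, each updating a local dictionary on its own data subset and periodically averaging with its neighbours. This is a minimal Python illustration under assumed choices (16 agents, a ring topology, a single ISTA sparse-coding step, consensus averaging, and arbitrary parameter values), not the implementation evaluated in the paper, which targets the 16-core Epiphany with DMA-based communication.

```python
# Minimal sketch of distributed dictionary learning: each agent holds a data
# subset, performs a local sparse-coding and dictionary-update step, then
# averages its dictionary with ring neighbours (a consensus step). All
# parameters, the topology, and the update rules here are illustrative
# assumptions, not the paper's specific algorithm.
import numpy as np

rng = np.random.default_rng(0)

n_agents = 16          # e.g. one agent per core on a 16-core device (assumption)
n_features = 64        # dimensionality of each data sample
n_atoms = 32           # number of dictionary atoms
samples_per_agent = 100
sparsity_penalty = 0.1
step_size = 0.01
n_iterations = 50

# Each agent holds its own subset of the data (assumed random split).
data = [rng.standard_normal((n_features, samples_per_agent)) for _ in range(n_agents)]

def normalise(D):
    # Keep dictionary atoms (columns) at unit norm.
    return D / np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Each agent starts from its own randomly initialised dictionary.
dicts = [normalise(rng.standard_normal((n_features, n_atoms))) for _ in range(n_agents)]

for _ in range(n_iterations):
    # Local step: one ISTA sparse-coding step from zero, then a gradient step
    # on the dictionary, using only the agent's local data subset.
    for a in range(n_agents):
        D, X = dicts[a], data[a]
        lipschitz = np.linalg.norm(D, 2) ** 2 + 1e-12
        Z = soft_threshold(D.T @ X / lipschitz, sparsity_penalty / lipschitz)
        residual = D @ Z - X
        dicts[a] = normalise(D - step_size * residual @ Z.T)

    # Co-operation step: agents exchange dictionaries with their two ring
    # neighbours and average them. On a multicore device this exchange is the
    # part that could be carried out with DMA transfers.
    new_dicts = []
    for a in range(n_agents):
        left, right = dicts[(a - 1) % n_agents], dicts[(a + 1) % n_agents]
        new_dicts.append(normalise((dicts[a] + left + right) / 3.0))
    dicts = new_dicts

# Rough convergence check: average relative reconstruction error per agent.
errors = []
for a in range(n_agents):
    D, X = dicts[a], data[a]
    lipschitz = np.linalg.norm(D, 2) ** 2 + 1e-12
    Z = soft_threshold(D.T @ X / lipschitz, sparsity_penalty / lipschitz)
    errors.append(np.linalg.norm(D @ Z - X) / np.linalg.norm(X))
print("mean relative reconstruction error:", np.mean(errors))
```

The ring-averaging step is a stand-in for whatever inter-agent communication scheme is used in practice; its cost is what DMA-based transfers would reduce on a device such as the Epiphany.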
Original language | English |
---|---|
Title of host publication | Proceedings of the 42nd IEEE International Conference on Acoustics, Speech and Signal Processing |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Number of pages | 5 |
ISSN (Electronic) | 2379-190X |
ISBN (Print) | 978-1-5090-4117-6 |
DOIs | |
Publication status | Published - 19 Jun 2017 |
Event | IEEE International Conference on Acoustics, Speech, and Signal Processing 2017, Hilton New Orleans Riverside, New Orleans, United States. Duration: 05 Mar 2017 → 09 Mar 2017. http://www.ieee-icassp2017.org/ https://doi.org/10.1109/ICASSP31846.2017 |
Conference
Conference | IEEE International Conference on Acoustics, Speech, and Signal Processing 2017 |
---|---|
Abbreviated title | ICASSP 2017 |
Country/Territory | United States |
City | New Orleans |
Period | 05/03/2017 → 09/03/2017 |
Internet address | http://www.ieee-icassp2017.org/ |