A Hardware Acceleration Scheme for Memory-Efficient Flow Processing

Xin Yang, Sakir Sezer, Shane O'Neill

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)
403 Downloads (Pure)

Abstract

This paper presents a hardware solution for network flow processing at full line rate. An advanced memory architecture based on DDR3 SDRAMs is proposed to overcome the limitations of flow matching in packet throughput, the number of supported flows, and the number of packet header fields (or tuples) used for flow identification. The described architecture has been prototyped to accommodate 8 million flows and tested on an FPGA platform, achieving a minimum of 70 million lookups per second, which is sufficient to process Internet traffic flows at 40 Gigabit Ethernet line rate.
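
The flow match described in the abstract amounts to looking up each packet's header tuple in a table of known flows. As a rough, purely illustrative software analogue (the paper's contribution is an FPGA architecture backed by DDR3 SDRAM, not software), the C sketch below hashes a conventional 5-tuple into a small flow table and counts packets per flow; the table size, field set, hash function and all names are assumptions made for the example, not details taken from the paper.

/*
 * Minimal software sketch of tuple-based flow matching, for illustration only.
 * The paper describes an FPGA architecture backed by DDR3 SDRAM; this code is
 * not that design. All names, sizes and the hash below are assumptions.
 */
#include <stdint.h>
#include <stdio.h>

/* Small table for the sketch; the hardware prototype holds ~8 million flows
 * in external DDR3 memory. */
#define FLOW_TABLE_SIZE (1u << 16)

/* A classic 5-tuple flow key; the paper's architecture supports a
 * configurable number of header fields. */
typedef struct {
    uint32_t src_ip;
    uint32_t dst_ip;
    uint16_t src_port;
    uint16_t dst_port;
    uint8_t  protocol;
} flow_key_t;

typedef struct {
    flow_key_t key;
    uint64_t   packet_count;
    int        in_use;
} flow_entry_t;

static flow_entry_t flow_table[FLOW_TABLE_SIZE];

/* Mix the tuple fields into a table index (illustrative hash only). */
static uint32_t hash_key(const flow_key_t *k)
{
    uint32_t h = k->src_ip;
    h = h * 31u + k->dst_ip;
    h = h * 31u + (((uint32_t)k->src_port << 16) | k->dst_port);
    h = h * 31u + k->protocol;
    return h & (FLOW_TABLE_SIZE - 1);
}

static int key_equal(const flow_key_t *a, const flow_key_t *b)
{
    return a->src_ip == b->src_ip && a->dst_ip == b->dst_ip &&
           a->src_port == b->src_port && a->dst_port == b->dst_port &&
           a->protocol == b->protocol;
}

/* Match a packet's tuple against the flow table, inserting it as a new
 * flow on first sight. Collisions are ignored to keep the sketch short;
 * a real table would probe or chain. */
static flow_entry_t *flow_match(const flow_key_t *k)
{
    flow_entry_t *e = &flow_table[hash_key(k)];
    if (!e->in_use) {
        e->key = *k;
        e->in_use = 1;
    }
    if (key_equal(&e->key, k)) {
        e->packet_count++;
        return e;
    }
    return NULL; /* hash collision with a different flow */
}

int main(void)
{
    flow_key_t k = { 0x0a000001u, 0x0a000002u, 40000, 443, 6 }; /* example TCP flow */
    flow_match(&k);
    flow_match(&k);
    printf("packets on flow: %llu\n",
           (unsigned long long)flow_table[hash_key(&k)].packet_count);
    return 0;
}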
Original language: English
Title of host publication: 2014 27th IEEE International System-on-Chip Conference (SOCC)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 437-442
Number of pages: 4
DOIs: https://doi.org/10.1109/SOCC.2014.6948969
Publication status: Published - 02 Sep 2014
Event: IEEE System-on-Chip Conference (SOCC) - Las Vegas, Nevada, United States
Duration: 02 Sep 2014 - 05 Sep 2014

Conference

Conference: IEEE System-on-Chip Conference (SOCC)
Country: United States
City: Las Vegas
Period: 02/09/2014 - 05/09/2014


  • Cite this

    Yang, X., Sezer, S., & O'Neill, S. (2014). A Hardware Acceleration Scheme for Memory-Efficient Flow Processing. In 2014 27th IEEE International System-on-Chip Conference (SOCC) (pp. 437-442). Institute of Electrical and Electronics Engineers (IEEE). https://doi.org/10.1109/SOCC.2014.6948969