
Audio Interface – Low Latency Performance

This article is a summary of the 3 part series I have posted at my DAWbench Blogsite. The information and collated data relate directly to Windows, but can be used as a reference for OSX as well, to a lesser degree.

There is a wide and varied range of audio interfaces available, all with their strengths in regards to specifications and features. The current market is crowded, with new interfaces coming online cramming in more and more features at pricing more affordable than at any time in the past, which on the surface seems like a win/win for those looking at dipping their toes into the pond, and for many it is just that.

However, for those who place higher demands on their systems, the cracks start appearing very quickly when these interfaces are driven to lower latencies. As low latency becomes more and more important, a lot of the gloss of the extra features pales when the performance of the drivers does not hold up to the demands of the low latency working environments that many of us require. While a lot of these interfaces will perform well for the majority in less critical working environments, the performance variable can become quite dramatic at the lower latencies, and that is what I will specifically be focusing on.

Hardware Buffer Setting vs Actual Latency :


All audio interfaces have their respective hardware control panels where adjustments to buffer/latency settings can be made. The buffer settings in some cases have little correlation to the actual latency achieved due to various factors, including added buffering for both the input and output streams, arbitration delays associated with the FPGA and the actual protocol, as well as AD/DA conversion.
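To illustrate the point, here is a rough sketch (with made-up figures, not measurements of any real interface) of how hidden buffering and converter delays can inflate the real round trip well beyond what the panel setting suggests:

```python
# Hypothetical illustration of why a "64 sample" panel setting rarely
# equals 64 samples of round trip latency. All buffer and converter
# figures below are assumptions for the example.

SAMPLE_RATE = 48000  # Hz

def rtl_ms(buffer_samples, input_safety, output_safety, converter_samples):
    """Estimate round trip latency in milliseconds.

    buffer_samples      -- the size shown in the control panel (per direction)
    input/output_safety -- hidden driver/FPGA buffering per direction
    converter_samples   -- combined AD + DA converter group delay
    """
    total = (buffer_samples * 2            # one input + one output buffer
             + input_safety + output_safety
             + converter_samples)
    return total * 1000.0 / SAMPLE_RATE

# A tight driver with minimal hidden buffering: close to the ideal figure.
print(rtl_ms(64, 0, 0, 40))        # 3.5 ms

# A padded driver at the same "64" panel setting: hidden safety buffers
# push the real round trip well past the advertised number.
print(rtl_ms(64, 200, 200, 48))    # 12.0 ms
```

Two interfaces showing the identical "064" setting can therefore sit at opposite ends of the real RTL spectrum, which is exactly what the measurements below bear out.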

Some interfaces even resort to simply reporting nominal figures, which compromises the ability of the host DAW to keep everything in sync due to wide variances between the reported and actual latency.
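As a rough illustration of why that matters: a DAW offsets newly recorded audio by the latency the driver reports, so any gap between the reported and actual RTL lands directly in the recording. The figures below are assumptions for illustration only:

```python
# Why inaccurate latency reporting breaks sync: the DAW compensates
# recorded audio using the driver-reported figure, so the residual error
# is simply actual minus reported. Example figures are assumptions.

SAMPLE_RATE = 48000

def recording_offset_ms(actual_rtl_samples, reported_rtl_samples):
    """Residual misalignment after the DAW compensates with the
    driver-reported figure instead of the true round trip latency."""
    error_samples = actual_rtl_samples - reported_rtl_samples
    return error_samples * 1000.0 / SAMPLE_RATE

# Honest driver: report matches reality, overdubs line up sample-accurately.
print(recording_offset_ms(576, 576))            # 0.0

# Nominal reporting: the driver claims 128 samples but the real round
# trip is 576, so every overdub lands late on the timeline.
print(round(recording_offset_ms(576, 128), 1))  # 9.3 ms
```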

The variances at respective latency settings are also quite substantial; for example, the RTL at a listed 064 setting can vary anywhere from under 3.5 ms all the way to over 12 ms, so obviously there is no real consistency in regards to what the actual panel setting represents. To say that there is a wide and varied range of reported latencies across the interfaces is an understatement; quite literally, some of the listed panel settings are little more than window dressing.

To help navigate the minefield, I collaborated with Andrew Jerrim of Oblique Audio and helped develop a Round Trip Latency Utility which allowed me to measure the actual RTL of each respective interface.
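For those curious about the general principle, a loopback measurement boils down to playing a known click, recording it back through a physical loopback cable, and locating the click in the recording. The sketch below simulates that idea; it is my own illustration of the technique, not the Oblique Audio utility itself:

```python
# Minimal sketch of loopback RTL measurement: emit an impulse, capture
# the return, and cross-correlate to find the delay. The physical round
# trip is simulated here with a fixed delay plus a little noise.

import numpy as np

SAMPLE_RATE = 48000
rng = np.random.default_rng(0)

def make_click(length=4800):
    """A single-sample impulse padded with silence."""
    signal = np.zeros(length)
    signal[0] = 1.0
    return signal

def simulate_round_trip(signal, delay_samples, noise=0.001):
    """Stand-in for playing out and recording back through a cable."""
    recorded = np.zeros(len(signal) + delay_samples)
    recorded[delay_samples:delay_samples + len(signal)] += signal
    return recorded + noise * rng.standard_normal(len(recorded))

def measure_rtl_ms(signal, recorded):
    """Cross-correlate the recording against the click to find the delay."""
    corr = np.correlate(recorded, signal, mode="valid")
    return int(np.argmax(corr)) * 1000.0 / SAMPLE_RATE

click = make_click()
recorded = simulate_round_trip(click, delay_samples=168)  # 168 samples
print(measure_rtl_ms(click, recorded))  # 3.5 (ms)
```

A real measurement replaces the simulated round trip with actual playback and capture through the interface, which is what exposes the hidden buffering the control panel never shows.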

It's also worth noting that simply having the latency value available doesn't guarantee that the interface will work reliably at that latency.


Protocols and Driver Performance – PCIe / Firewire / USB / Thunderbolt :

There are various protocols utilised for connecting digital audio interfaces to computer platforms. PCI/PCIe was the more traditional protocol in the earlier days when studio computing was reserved for larger workstations, and was the most consistent for delivering solid and reliable performance over competing technologies like Firewire and USB. As the technologies evolved and studio/working requirements shifted to needing more portable solutions for laptops, for example, Firewire and USB/USB2 became more and more popular. This allowed manufacturers to offer solutions that could be used across both desktop workstations and mobile setups equally. This has now evolved to the point where PCIe, despite still being a solid and reliable choice, is being overtaken in popularity and available interface options by the mobile focused protocols of FW/USB2/Thunderbolt. How the new preferred protocols fare in regards to comparative low latency performance is the real question, and one that I have spent enormous time and energy investigating over the years.

As the testing pool increased, it was becoming quite evident that there was a consistency amongst various interfaces from different manufacturers in regards to not only control panel settings and corresponding latency values, but also the comparative performance of the units. It was pretty obvious we were dealing with not only an identical OEM FW controller, but also a baseline driver.

Trying to get official confirmation of the OEM controllers used was proving difficult, so I needed to dig a little deeper. Some trusted industry contacts gave me clues where to dig, and I managed to narrow down the OEM FW controllers that were most widely used. The most used FW OEM controller is/was the Dice range from TC Applied Technologies. Some manufacturers using these controllers: TC (obviously), AVID (Mbox 3 Pro), M-Audio (Profire range), Presonus (current), Focusrite, Mackie (current Onyx), Midas, Allen and Heath; the list goes on. It is worth noting that AVID and M-Audio differed from the other units listed in that they did not use the base OEM driver, instead developing and using their own.

TC Applied was acquired by Behringer in 2015 and has now ceased supply to all 3rd party manufacturers, so all of the interfaces using the Dice solutions are effectively End Of Life.

The second most widely used range of controllers is/was by ArchWave (formerly BridgeCo); some of the manufacturers using these controllers include Apogee, Lynx, M-Audio (older FW units), MOTU, Presonus (older units), Prism and Roland. The 3rd controller is a joint venture developed by Echo and Mackie which was used in the Echo Audiofire line of interfaces, and the earlier Mackie Onyx rack/desktop models.

The most widely used OEM USB2 controller is the XMOS; some manufacturers are using a custom FPGA in combination with other 3rd party USB controllers. Base drivers are provided by numerous 3rd parties (CEntrance, Ploytec, Thesycon), and in short, it's an absolute crap shoot. Despite the interfaces using the same OEM controller, performance varies greatly depending on the choice of OEM 3rd party base driver and the added optimizations (if any) being deployed by the manufacturers. 3rd party OEM controllers and associated drivers cover a large % of the FW/USB audio interfaces available; some exceptions are Steinberg/Yamaha, Roland and RME, who develop not only their own proprietary custom FPGA protocol controllers in house, but also couple that with development of the drivers. RME is a stand out not only in the level of development that they apply at both controller and driver level, but also in the level of performance that they have achieved across all of the available protocols.

The last option for device protocol is the one with the greatest potential to equal PCIe levels of performance, which is of course Thunderbolt – i.e. the copper version of Light Peak, which is essentially a dual protocol external interconnect carrying dedicated PCIe x4 lanes alongside DisplayPort. This in theory will allow manufacturers to achieve PCIe LLP from an external interconnect, as well as being backward compatible with all current interconnects running on the PCIe bus – FW and USB2/USB3. Despite the potential advantages it can offer digital audio interface manufacturers, it's been a slower road in regards to adoption than many have hoped. Mac-only TB interfaces have hit the market and are showing good potential; adoption on Windows has been a lot slower, and we are yet to see anything available apart from Lynx.

There have been numerous theories as to the reason for the slow adoption, and it's evident that we are not really getting the whole story. There are uploaded videos from an Intel Development Forum in September 2010 (6 months before Apple officially released TB) where there was a proof of concept demo of Light Peak being used for a professional digital audio application by none other than AVID on Protools HD, using a BETA HD I/O on a Compal white box PC laptop running Windows 7. Videos can be seen Here and Here

In short, and from what I have been able to conclude through my own investigation, after the initial exclusivity deal that locked the TB1 protocol to Apple, various other stumbling blocks regarding licensing and other overzealous requirements made it very difficult for developers to invest in R&D. Not to mention Microsoft's refusal to "officially" support TB2 on Windows 7/8. That's not to say it didn't work, simply that it was not officially supported. Microsoft now officially supports TB3, but only on Windows 10, which is a move in the right direction. However, with all the current audio interfaces being TB2 and requiring an additional TB3-TB2 adaptor, and a large % of the end user base not being on Windows 10, I still feel that TB has a way to go before being more widely accepted on the Windows platform.


LLP ( Low Latency Performance ) Rating :

The real question when it comes to low latency performance is not what buffer settings are available in any respective interface control panel, but whether the available settings are actually usable in real world working environments.

The delivered I/O and RTL have little relevance if the driver is inefficient at the preferred latencies. It is simply a number that can be quoted and bandied around, but is irrelevant to the actual end user experience if you cannot realistically use those lowest buffer settings. A perfect example (and let's remove the 064/128/256 window dressing): some interfaces need 9+ ms of playback latency to perform equally to others at 4 ms in respect of the number of plugins/polyphony on the benchmarks. This is the reality of the actual overall performance. It has nothing to do with delivered I/O and RTL latency; it is purely the efficiency of the driver, evidenced by the fact that it takes more than double the playback latency to deliver equal performance to a better performing driver. Latency performance is more than just input monitoring / live playing.

With that in mind, I decided to develop a rating system that takes into account numerous variables relative to overall low latency performance; a full explanation of the LLP – Low Latency Performance Rating can be read Here. To summarize: gauged against the reference RME HDSPe baseline, the average % value across the 3 benchmarks is multiplied by the RTL % value to give the final rating. I think that is a fair appraisal using the collated data, and it gives deserved credit and advantage to those cards that do have lower individual In/Out and Round Trip Latencies.
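For clarity, the arithmetic boils down to the following. Only the formula reflects the rating as described above; the interface figures in the example are made up:

```python
# Sketch of the LLP rating arithmetic: the average benchmark score
# (as a % of the RME HDSPe reference) is scaled by the RTL % (reference
# RTL divided by the interface's RTL). All figures are example numbers,
# not results for any real interface.

REFERENCE_RTL_MS = 3.5  # assumed RTL of the reference PCIe card

def llp_rating(benchmark_percents, rtl_ms):
    """benchmark_percents: scores vs the reference across the 3 benchmarks."""
    avg_benchmark = sum(benchmark_percents) / len(benchmark_percents)
    rtl_percent = REFERENCE_RTL_MS / rtl_ms * 100.0
    return avg_benchmark * rtl_percent / 100.0

# The reference card scores 100% on everything by definition.
print(llp_rating([100, 100, 100], 3.5))  # 100.0

# A hypothetical interface averaging 80% on the benchmarks, with double
# the reference RTL at the same panel setting: its rating halves again.
print(llp_rating([82, 78, 80], 7.0))     # 40.0
```

This is why a card with a merely decent benchmark average but a genuinely low RTL can out-rate one that posts good plugin counts only at padded latencies.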

The charts below give a clearer indication of what I have outlined.



Getting into the Details and Navigating the Curves :

The results clearly show that there are huge variables in not only the dedicated I/O latency and RTL, but also overall performance at the respective buffer settings. It's not as simple as comparing performance results at any given control panel buffer setting when there are such large variables in play – i.e. RTL for buffer settings of 064 ranges from 3.5 ms to over 12 ms, as I noted earlier.

It's also not as clear cut as saying that, for example, FW interfaces offer better performance than USB 2.0 or vice versa, as there are instances of respective interfaces clearly outperforming others across the various protocols.

The tested PCI/PCIe interfaces do however lead in performance, as clearly indicated by the tables above, but the variance is not as large as it once was. The better FW and USB interfaces, for example, are not too far off the base reference in performance, I/O and RTL. Thunderbolt should perform closer to the PCIe interfaces; it will be interesting to see if they do perform to their potential on Windows when they are finally rolled out.

It is also evident that numerous manufacturers are utilizing not only identical controllers but also the bundled OEM driver, which for the most part is convenient for getting products to market. However, the products using the above cover a large end user demographic, and the drivers are not necessarily the best catch-all in some working environments.

Whereas they will be fine for live tracking and in-session environments focussing on audio where low latencies are not a priority, once the latencies are dialled down for live Virtual Instrument playing / Guitar Amp Simulators, for example, the performance variable at those lower/moderate latencies can be in the vicinity of 40+%. That is a huge loss of potential overhead on any system, let alone current multi core systems.

Conclusion and Final Thoughts :

What I have learned over the course of the last 5 years of sharing my R&D publicly is that Low Latency Performance is not very well understood by a large section of the end users, and the developers of the 3rd party controllers and the manufacturers are all too happy to continue to market and blur the lines when talking about interface latencies.

One typical method is marketing the interface as having good Low Latency when they are specifically referring to Low Latency Monitoring thru a DSP Mixer Applet / ASIO Direct Monitoring – which is essentially the latency of the AD/DA + a few added samples for arbitration to/from the DSP/FPGA. Are they specifically marketing the interface in a false manner? Of course not, as pretty much all interfaces now have a good DSP powered onboard mixing facility, so direct monitoring sans FX is essentially real time. This however has been available since the late 90's on Windows and isn't anything particularly new or anything to be hyping about IMO.

Where it gets murkier is when we are dealing with I/O and RTL latencies while monitoring thru FX and/or playing Virtual Instruments in real time, as that is when the efficiency of the controllers/drivers comes into play. Further to that, the actual scaling performance of the above at the respective working latencies is an area with huge variables, as I noted earlier. Now to be clear, I am focussing purely on the area of Low Latency Performance for those needing that facility when using Guitar Amp Simulators and Virtual Instruments. For those needing basic low latency hardware monitoring for tracking audio, with minimal focus on larger FX/MIDI/Virtual Instrument environments, it's not as much of a priority.

If the drivers are stable at any given working latency that allows the end user to track and mix with minimal interruption, then all well and good. The question then is: how many manufacturers using the lesser performing controller/driver options specify the preferred working environments that their interfaces are best suited to? We all already know the answer to that.

This is an area that some manufacturers and their representatives are not overly happy about having an added focus on. I have had numerous encounters with certain developers and manufacturers who seem confused if not clueless in regards to the whole area of LLP that I have presented, and who continue to deliver products to the market with a narrower focus than the actual end user requirements. I understand that this can be very sensitive and even confronting, especially if the developer is being challenged about a poorly performing driver, but I would think it is in their best interest to remain open and communicative, especially when time and energy is being offered to help improve the driver. I also understand that some do not have the direct resources at their disposal, or their R&D has led them to a position that is not easily remedied, but there are direct benefits to all involved when better performing drivers are delivered to the end users. My aim in shining a light on this area of driver performance is to bring a higher focus to the manufacturers who do make the extra effort, and to see them duly rewarded.

If you have any further questions on the information in this article, feel free to contact me; details below.

Vin Curigliano
AAVIM Technology
Office : 613 9440 6284
Mobile : 0413 608 728
Email :
Web :

