Personal supercomputers look suitable for quants

Micro Electronics

by CBR Staff Writer| 18 December 2008

Nvidia and its systems partners are rolling out GPU-based systems for compute-intensive applications in scientific and technological research, which could also be relevant in the financial markets in areas such as quantitative algorithm development.

In terms of general-purpose computing, graphics processing units (GPUs) are suitable for applications that process large volumes of data in parallel and benefit from interactivity, i.e. the ability to tweak an algorithm's parameters and quickly see the results across the full data set. Alongside things like seismic interpretation, tomographical research, computational fluid dynamics and medical imaging registration, there is clearly merit in the development of quantitative models to form the basis of trading strategies in the financial markets.

Nvidia has been touting its GPU as a platform for non-graphics uses in general-purpose computing for at least five years and, in November 2006, it launched a capability to enable this, called Cuda. This stands for Compute Unified Device Architecture and is a compiler and set of development tools that enable programmers to use a variant of C, extended with a handful of parallel constructs, to code algorithms for execution on the GPU.
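The programming model described above can be illustrated with a minimal sketch. The kernel name, array contents and launch dimensions below are illustrative choices, not details from the article; the point is the small C extension Cuda introduces: a `__global__` qualifier marking a function that runs on the GPU, and a `<<<...>>>` launch syntax that spawns many threads over the data in parallel.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Each GPU thread scales one element of the array; thousands of
// these run concurrently, which is the data-parallel model Cuda exposes.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main(void) {
    const int n = 1024;
    float host[1024];
    for (int i = 0; i < n; ++i) host[i] = (float)i;

    // Copy the data set to GPU memory.
    float *dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch 4 blocks of 256 threads: one thread per element.
    scale<<<4, 256>>>(dev, 2.0f, n);

    // Copy the results back and inspect one element.
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    printf("%f\n", host[10]);
    return 0;
}
```

A quant tweaking, say, the volatility parameter of a pricing model would relaunch a kernel like this over the whole data set and see the effect almost immediately, which is the interactivity the article refers to.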

Since then it has shipped 100 million of its GeForce 8 Series cards into the market, representing the total number of Cuda-enabled platforms currently in the field, though clearly the vast majority of these are being used for graphics processing in things like gaming products.

Among the applications Nvidia targets with its Cuda-enabled GPUs are a number in the financial markets. Nvidia wants to see its processor replacing FPGAs, for instance, in ticker plants, whose vendors are keen to add value beyond feed handling functionality, by carrying out some of the number crunching on market data ahead of delivering it to a trader or an algoserver.

That scenario, however, is essentially one in which the human factor is absent, i.e. the compute-intensive calculations can be programmed into the ticker plant and carried out automatically. The personal supercomputers, i.e. GPU-based workstations, that companies like Dell, HP and Lenovo will be rolling out next year, based on Nvidia's processors, are suitable for environments such as test and development, where a human being, be they a programmer or a scientific researcher, seeks to shorten the development cycle. While many of the target users are in academia or industries such as the oil and gas sector, quantitative algorithm developers are clearly also potential customers for such products.

So-called financial engineers in quantitative analysis, a.k.a. quants, develop algorithms to underpin trading strategies for hedge funds in today's increasingly complex, multi-venue environment. One of the frustrations in this area is that, while coming up with a big quant model can take anywhere from 10 weeks to seven months, some trading strategies have an active life of just three or four months.

A number of ISVs have sought to address this problem with software products, known collectively as alpha generation platforms, which in essence streamline the development process and thus improve its productivity. Their customers would seem to be natural targets for the new personal supercomputers, which at around $10,000 for 1.8 teraflops on the desktop are a bagatelle for anyone investing in a high-frequency trading strategy.

 
