Parallelism or Paralysis:
The Essential High Performance Debate
 
There is an insatiable appetite for performance in capital markets. Not just investment performance, but computational performance as well – and more so with each passing day. Buy side, sell side, intermediary or vendor – and no matter the nature of the underlying strategies or mix of services – all of these firms will compete increasingly on output from high-performance computing (HPC) infrastructure. The ability to perform increasingly complex calculations on increasingly complex data flows, at higher update frequencies, is taking its place in today’s pantheon of competitive necessities. Navigating global markets can no longer be conducted with the low precision of overnight batch runs; it now demands the enhanced precision of intraday, near-real-time and real-time calculations.

Parallelism – a topic that will be new to many, but one that TABB Group believes will quickly become part of the common technology vernacular in our business – offers a significant key to HPC challenges. On the backs of increasingly parallel storage, compute and network architectures, specifically developed (and re-developed) software can now deliver performance that is orders of magnitude greater than that of serial software running on serial hardware.
Fair warning: Parallel programming is hard. It will not be a challenge tackled by all trading firms and their solution providers, and certainly not for all use cases; for many of today’s computational challenges there is no known way to decompose and parallelize the work. For the use cases that do apply, however, exceedingly few have been structured to exploit parallelism. That leaves an incredible wealth of performance capability lying untapped on nearly every computer system. This is a call to first movers to get up to speed on the competitive advantages of parallelism.
E. Paul Rowady, Jr.
V12:030 | June 2014 | www.tabbgroup.com
 
Data and Analytics
 
 
Introduction
Riding an unprecedented and global wave of regulations, increased competition, the high pace of transformation, and increasingly complex data flows has come a growing need for high-performance capabilities. A few steps back from the bleeding edges of speed being explored by some trading firms lies a spectrum of computationally intensive use cases that has spawned an ongoing search for new methods and tools to employ more number-crunching horsepower. In many ways, these challenges – sometimes known as throughput computing applications – are far more complex than the pure speed challenges. As such, this is a broad area of development – one we have been calling Latency 2.0: Bigger Workloads, Faster – that is more generally being addressed by high-performance computing (HPC) platforms.

Parallelism is a topic within the overall HPC juggernaut that is emerging in both awareness and deployments because of its potential to respond to some of these performance demands. The benefits of parallelism can be achieved on multiple and complementary levels – ranging from storage architectures to compute architectures to network architectures – and have already demonstrated potential for dramatic performance gains. Modern server architectures are now increasingly parallel; therefore, compute performance is now a function of the level of parallelism enabled by your software (see Exhibit 1, below).
Exhibit 1: Only Parallel Software + Parallel Hardware = Parallel Performance
Source: TABB Group, Intel
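To make the exhibit’s point concrete, below is a minimal sketch (not from the report) of the same workload written once as serial software and once as parallel software. It assumes a C++17 compiler with OpenMP enabled (for example, the -fopenmp flag); the function and data are illustrative placeholders. Only the parallel version lets the parallel hardware actually contribute.

// Minimal sketch (assumption: C++17 compiler with OpenMP enabled).
// The same per-element "complex calculation" is run twice: the serial
// loop uses one core no matter how many exist; the parallel loop lets
// the runtime spread iterations across all available cores.
#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical stand-in for a heavy per-element calculation.
static double heavy_calc(double x) {
    double acc = 0.0;
    for (int k = 1; k <= 1000; ++k) acc += std::sin(x * k) / k;
    return acc;
}

int main() {
    const long long n = 1'000'000;
    std::vector<double> in(n), out(n);
    for (long long i = 0; i < n; ++i) in[i] = static_cast<double>(i) * 1e-6;

    // Serial software on parallel hardware: one core does all the work.
    for (long long i = 0; i < n; ++i) out[i] = heavy_calc(in[i]);

    // Parallel software on parallel hardware: the same loop, explicitly
    // parallelized, so every core can process a slice of the iterations.
#pragma omp parallel for
    for (long long i = 0; i < n; ++i) out[i] = heavy_calc(in[i]);

    std::printf("out[0]=%f out[n-1]=%f\n", out[0], out[n - 1]);
    return 0;
}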
Where graphical processing units (GPUs) were until recently seen as the tool of choice for harvesting the benefits of highly parallel processing in general processing applications, new central processing unit (CPU) architectures are now proving equally powerful, yet at a lower total cost of ownership (TCO) – a more detailed comparison will follow.
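As a rough illustration of the CPU side of that comparison (not from the report), the sketch below shows the two levels of parallelism a modern CPU exposes: SIMD lanes within each core and multiple cores per socket. It assumes a compiler with OpenMP 4.0 or later; the workload is a placeholder.

// Hedged sketch (assumption: C++ compiler with OpenMP 4.0+, e.g. -O2 -fopenmp).
// "parallel for" distributes iterations across cores, while "simd" asks the
// compiler to vectorize each thread's chunk across the core's SIMD lanes.
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

#pragma omp parallel for simd
    for (int i = 0; i < n; ++i)
        c[i] = a[i] * 1.5f + b[i];   // simple multiply-add per element

    std::printf("c[0]=%f c[n-1]=%f\n", c[0], c[n - 1]);
    return 0;
}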
 
 
Computational Targets
Parallelism can significantly boost the performance and throughput of problems that are well suited for it – that is, large problems that can be decomposed easily into smaller problems that are then solved in parallel. It is technology’s answer to divide et impera – divide and conquer. Truth be told, however, parallelism may not be right for all firms and definitely is not right for all use cases. On top of the hardware and code-development challenges, there are applications for which no method of parallelization is known today. For instance, use cases related to alpha discovery and capture are especially tough to optimize because their out-of-sample behavior is so dynamic. From a programmer’s perspective, parallel programming also brings challenges that don’t exist in sequential programming, including a lack of basic developer tools (IDEs, compilers, debuggers, etc.), the added complexity of writing thread-safe parallel algorithms, and exposure to low-level programming languages such as C and C++. Specific capital markets examples of well-suited workloads include:
 
• Derivatives pricing (including swaps) and volatility estimation;
• Portfolio optimizations;
• Credit value adjustment (CVA) and other “xVA” calculations; and
• Value-at-Risk (VaR), stress tests and other risk analytics.

These examples represent a subset of a much broader spectrum of computational challenges that generally exhibit the greatest potential for parallelism, including those that use or require:
 
• Linear Algebra: Used for matrix or vector multiplication; often a fundamental part of applications in finance, bioinformatics and fluid dynamics.
• Monte Carlo Simulations: Taking a single function, executing it in parallel on independent data sets, and using the results to glean actionable knowledge – for example, identifying probabilistic outcomes, including “fat tail” events, in a financial portfolio (a parallel sketch follows this list).
• Fast Fourier Transformations (FFTs): Converting time domains into frequency domains, and vice versa; essential for signal processing applications and useful for options pricing and other financial time series analysis, including applications in high-frequency trading.
• Image Processing: With each pixel calculated separately, there is ample opportunity for parallelism – for example, facial recognition software.
• Map/reduce: Taking large datasets and distributing the data filtering, sorting and aggregation workloads across multiple nodes in a compute cluster – for example, Google’s MapReduce or Hadoop.
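As referenced in the Monte Carlo item above, here is a minimal parallel Monte Carlo sketch (not from the report). It estimates a 99% Value-at-Risk figure from simulated portfolio losses and gives each thread its own random number generator, one common way to keep the parallel algorithm thread-safe. It assumes a C++17 compiler with OpenMP; the one-factor loss model, seed and parameters are illustrative, not taken from the report.

// Hedged sketch (assumptions: C++17, OpenMP enabled; toy loss model).
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>
#include <omp.h>

int main() {
    const int n_paths = 200'000;              // independent simulation paths
    std::vector<double> losses(n_paths);

#pragma omp parallel
    {
        // Per-thread generator: no mutable state is shared between threads,
        // which sidesteps the thread-safety pitfalls noted earlier.
        std::mt19937_64 rng(12345u + static_cast<unsigned>(omp_get_thread_num()));
        std::normal_distribution<double> shock(0.0, 1.0);

#pragma omp for
        for (int i = 0; i < n_paths; ++i) {
            // Toy one-factor model: loss grows with an adverse market shock.
            const double z = shock(rng);
            losses[i] = std::max(0.0, 100.0 * z);
        }
    }

    // 99% VaR estimate: the 99th percentile of the simulated loss distribution.
    const int q = n_paths * 99 / 100;
    std::nth_element(losses.begin(), losses.begin() + q, losses.end());
    std::printf("Estimated 99%% VaR: %.2f\n", losses[q]);
    return 0;
}

The same pattern of fanning independent work items out across threads and aggregating the results afterward is also the essence of the map/reduce item above, applied within a single server rather than across a cluster.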
