In this paper, Joshi and Kwong propose a strategy for calculating XVAs in American Monte Carlo that avoids using the regression to compute the exposure directly: the regression is used only to obtain the sign of the exposure. They report that this gives a good improvement in accuracy.
Has anyone tried this and compared it to the standard approach over a wide range of products in a portfolio, to assess its practicality? In particular, I'm wondering about its feasibility at portfolio level, where it may lead to regressions in a large number of dimensions.
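As I understand the idea, it can be sketched as follows: the regression value decides only whether the exposure is positive, and the pathwise discounted cash-flows supply the level. A minimal one-time-step sketch in Python (the toy dynamics and all names are my own illustrative assumptions, not the paper's setup):

```python
# Hedged sketch: sign-only regression for positive exposure (EPE) at one
# time step, in the spirit of the sign-based idea described above.
# The state, cash-flows and basis are toy illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_paths = 50_000

# Toy state at time t and pathwise discounted future cash-flows c;
# the "true" continuation value is x - 0.1, observed with noise.
x = rng.normal(size=n_paths)
c = (x - 0.1) + rng.normal(scale=1.0, size=n_paths)

# Regression proxy for V_t (low-order polynomial basis, as in standard AMC).
basis = np.vander(x, 4)                      # columns [x^3, x^2, x, 1]
coef, *_ = np.linalg.lstsq(basis, c, rcond=None)
v_hat = basis @ coef

# Standard estimator: positive part of the regression value.
epe_standard = np.mean(np.maximum(v_hat, 0.0))

# Sign-only estimator: regression decides the sign, cash-flows give the level.
epe_sign_only = np.mean(np.where(v_hat > 0.0, c, 0.0))
```

The intuition, as I read it, is that with the standard estimator the regression error feeds directly into the positive part, while with the sign-only estimator it only matters where the sign is misclassified, i.e. near the boundary where the exposure is small anyway.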
As far as we can see, there seems to be agreement that cash-flows in currency CY1, collateralized in currency CY2, should be discounted using the CY1 cross-currency curve with the CY1/CY2 basis. This is simple enough when the CSA only allows CY2 as posting currency, but when several currencies are eligible and the poster can choose among them, this should affect the discounting methodology. We have seen the Cheapest-To-Deliver (CTD) curve method in several software packages, where a blended curve is constructed from the optimal forward rates across the relevant cross-currency curves.
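For what it's worth, the blended-curve construction we have seen can be sketched as follows: on each accrual period take the best (highest) period forward rate among the eligible currencies' collateral-adjusted curves, and compound those forwards back into discount factors. A minimal sketch with toy flat curves (grid, currencies and rates are illustrative assumptions):

```python
# Hedged sketch of a cheapest-to-deliver blended discount curve: on each
# period, take the maximum simply-compounded forward rate across the
# eligible collateral currencies' curves. Inputs are toy assumptions.
import numpy as np

t = np.linspace(0.0, 10.0, 41)               # quarterly grid, in years

# Discount factors of the CY1 curves collateralized in each eligible
# currency (flat toy rates for illustration).
dfs = {ccy: np.exp(-r * t) for ccy, r in
       {"USD": 0.020, "EUR": 0.015, "JPY": 0.005}.items()}

def blended_ctd_dfs(curves, t):
    """Compound the period-wise maximum forward rate across the curves."""
    dt = np.diff(t)
    # Simply-compounded period forwards: (DF_i / DF_{i+1} - 1) / dt
    fwds = np.array([(df[:-1] / df[1:] - 1.0) / dt for df in curves.values()])
    best = fwds.max(axis=0)                  # CTD: post where rates are highest
    return np.concatenate([[1.0], np.cumprod(1.0 / (1.0 + best * dt))])

df_ctd = blended_ctd_dfs(dfs, t)
```

With flat toy curves one currency dominates everywhere, so the blend collapses to that curve; with real curves the optimum switches between currencies across the grid, which is the whole point of the blend.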
We do not know, however, how widespread this methodology is, or what companies do when the CSA allows not only cash but also securities such as bonds, possibly in several currencies. Furthermore, some CSAs appear to have posting conditions that depend on credit quality, which makes things even more complex.
We’d be interested to hear any experiences and opinions on the methodologies employed out there for the CSA types above.
By ‘Mark-To-Market’ I mean cross-currency swaps with resetting notionals. As far as I know, many cross-currency basis quotes refer to this type of swap rather than to constant-notional ones.
One may decide to value all the cash-flows with yield curves alone and simply bootstrap the cross-currency curve under this assumption. But in principle there should be a quanto effect in the pricing of such swaps, and I'd expect to need a hybrid FX-IR model to calculate it.
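The size of the term a quanto-less bootstrap drops can be sketched under the stylized assumption that the FX rate X and the Libor forward L are correlated driftless lognormals in the same measure: then E[X(T)·L(T)] = FX_fwd(T)·L(0)·exp(ρ·σ_X·σ_L·T), so each FX-reset coupon carries a multiplicative factor that the quanto-less valuation sets to 1. A minimal sketch (all parameter values are illustrative assumptions):

```python
# Hedged sketch: magnitude of the quanto term a plain (quanto-less)
# bootstrap ignores. Under joint driftless lognormal assumptions,
#   E[X(T) L(T)] = FX_fwd(T) * L(0) * exp(rho * sig_fx * sig_libor * T),
# so each FX-reset coupon carries the multiplicative factor below.
import math

def mtm_coupon_correction(rho, sig_fx, sig_libor, t):
    """Multiplicative quanto-style correction to an FX-reset Libor coupon."""
    return math.exp(rho * sig_fx * sig_libor * t)

# e.g. a 5y coupon with 10% FX vol, 30% Libor vol, 30% correlation:
bump = mtm_coupon_correction(0.3, 0.10, 0.30, 5.0) - 1.0
```

Even with moderate parameters the effect is a few percent of the coupon at the long end, which is why I'd expect a hybrid FX-IR model rather than a pure curve bootstrap to be needed in principle.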
Has somebody seen that done somewhere, or is this issue always handled by a simple quanto-less bootstrap?
Rates turning negative in JPY have brought this subject to my attention. I have started implementing Free-Boundary SABR (FSABR below). I ran into some practical problems at and around the ATM point, where very small numbers are divided by very small numbers, but these can be resolved with Taylor expansions.
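To illustrate the kind of fix I mean: the classic instance of such a 0/0 in SABR-type formulas is the ratio z/χ(z) from the Hagan expansion, which is indeterminate as z → 0 at ATM, and the same Taylor-expansion trick applies there. A small sketch (the switching threshold is an arbitrary assumption):

```python
# Hedged sketch of the ATM 0/0 fix: z/chi(z) is 0/0 as z -> 0, so below a
# small threshold we switch to its Taylor series
#   z/chi(z) = 1 - rho*z/2 + (2 - 3*rho^2)*z^2/12 + O(z^3).
# The threshold eps is an arbitrary illustrative choice.
import math

def z_over_chi(z, rho, eps=1e-6):
    """Stable z/chi(z), chi(z) = ln((sqrt(1-2*rho*z+z^2)+z-rho)/(1-rho))."""
    if abs(z) < eps:
        return 1.0 - rho * z / 2.0 + (2.0 - 3.0 * rho**2) * z**2 / 12.0
    chi = math.log((math.sqrt(1.0 - 2.0 * rho * z + z * z) + z - rho)
                   / (1.0 - rho))
    return z / chi
```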
I now get a pretty good match against Monte Carlo in the uncorrelated case, but things are not so good in the correlated case. The problem is that I'm trying to match a complicated, approximate closed form (exact only for zero correlation) against a complicated Monte Carlo with known problems near zero. Indeed, the FSABR paper explains that a simple Euler scheme can go quite wrong near zero, and that's what I've been using in my tests. So it's difficult to say whether the mismatches I'm seeing are due to a mistake in my implementation of the correlation mapping, or to the Monte Carlo not being accurate enough.
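For concreteness, the Monte Carlo I'm comparing against is a plain Euler scheme along these lines (all parameter values are illustrative assumptions); as the paper warns, this naive scheme is exactly the one that can misbehave near F = 0, so treat it as a debugging baseline rather than a reference price:

```python
# Hedged sketch: naive Euler scheme for free-boundary SABR,
#   dF = v * |F|^beta dW1,  dv = nu * v dW2,  corr(dW1, dW2) = rho,
# with F free to cross zero (0 <= beta < 1/2). This is the scheme the
# FSABR paper reports as inaccurate near zero; parameters are toy values.
import numpy as np

def fsabr_euler_paths(f0, v0, beta, nu, rho, t, n_steps, n_paths, seed=0):
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    f = np.full(n_paths, f0)
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        f = f + v * np.abs(f) ** beta * np.sqrt(dt) * z1      # free boundary: |F|^beta
        v = v * np.exp(nu * np.sqrt(dt) * z2 - 0.5 * nu**2 * dt)  # exact log step, v > 0
    return f

# Example: slightly negative forward, as for JPY rates.
paths = fsabr_euler_paths(f0=-0.001, v0=0.005, beta=0.25, nu=0.3,
                          rho=-0.3, t=1.0, n_steps=256, n_paths=100_000)
call = np.maximum(paths, 0.0).mean()             # undiscounted zero-strike call
```

A sanity check I'd suggest before trusting it at all: F should stay a martingale (the sample mean of the paths should sit very close to f0), even when the zero-strike option prices are off.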
A few questions to the readers:
- Has somebody tried the above and seen problems similar to mine (or other problems)?
- Has somebody tried to compare their implementation against Numerix's?
- Has somebody used Numerix's FSABR, and how good is it?
- Has somebody implemented the other models suggested by Antonov, i.e. the Mixture SABR family?
I'm writing a set of XAML styles in a separate WPF library, so that I can use the same styles both in Windows applications and in Excel. While my WPF application can see them, the Excel add-in (VSTO + ExcelDna) cannot find them. Has anybody tried something similar before? I've posted this question in the ExcelDna forum, but with no answer yet.