Here are some pictures of Institut Mittag-Leffler, and of the participants at the Stochastic Partial Differential Equations Workshop.
Archive for the ‘Maths’ Category
This is really obvious stuff that you might find useful if you’re reading Krylov’s papers on Bellman’s PDE. In fact, it’s so obvious that he doesn’t even bother stating it.
Or have a look here for a more general form of Young’s inequality.
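For the record, the basic statement, and the weighted version that tends to show up in a priori estimates (the notation here is mine, not taken from the linked note):

```latex
% Young's inequality: for a, b >= 0 and conjugate exponents p, q > 1
% with 1/p + 1/q = 1,
a b \;\le\; \frac{a^p}{p} + \frac{b^q}{q}.

% Weighted version: for any \varepsilon > 0, apply the above to
% (\varepsilon^{1/p} a)(\varepsilon^{-1/p} b) to get
a b \;\le\; \varepsilon\,\frac{a^p}{p} + \varepsilon^{-q/p}\,\frac{b^q}{q}.
```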
Hurray, I’ve now done my end-of-year report talk. I really didn’t like the idea of having to do it. The reason? It was meant to cover everything I’ve been working on for a year, and it wasn’t meant to last more than 30 minutes.
I didn’t manage to finish in 30 minutes (I don’t know who came up with this time limit). My talk still lasted nearly an hour, and yet I couldn’t go into any detail on most things. I guess the only thing I can do with the slides now is to put them online. I’m sure I’ll be able to recycle them soon enough.
So now I only have to finish the end-of-year report and I’m ready for a holiday. Unfortunately, that might not be so easy, because I uncovered a slight inaccuracy in one of my proofs. So what I have here, and probably here, is not strictly speaking correct (the problem is that p_0, as I have it there, is not measurable with respect to the sigma-algebra F_t_0, but rather F_t_1). I still maintain that it’s just a technical wrinkle that can be ironed out easily enough. It’s just annoying that it was in one of the proofs I wanted to go into in more detail during the talk, because I quite like it.
I finally have the generalization showing that randomized stopping can’t give better results than optimal stopping. It can be found here. While in the previous post I only claimed I could do it for step functions, here I show that it can be done for any continuous and adapted function satisfying a basic integrability condition. I think one could also drop the continuity assumption. This could probably be done in the same way as in the construction of the Ito integral. I don’t think I’ll have time to look into that any time soon, though.
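The route I have in mind is just the standard device from the Ito-integral construction (not worked out in the note, and the notation below is mine): approximate a continuous adapted process r on [0, T] by adapted step processes.

```latex
% Dyadic approximation of a continuous adapted process r on [0,T]
% by adapted step processes:
r_n(t) \;=\; \sum_{i=0}^{2^n - 1} r\!\left(t_i^n\right)\,
  \mathbf{1}_{\left[t_i^n,\, t_{i+1}^n\right)}(t),
\qquad t_i^n = \frac{iT}{2^n}.
% Each r_n is adapted and step-wise, and r_n(t) -> r(t) for every t
% by continuity; the integrability condition should then allow
% passing to the limit in the step-function result.
```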
It seems that the previous note is just an instance of a more general principle. This is the first post attempting to figure out this generalization. So there you go, step 1 is: here. So far it only works for adapted processes which must also be step-wise. And it’s only the part showing that randomized stopping can’t be better than optimal stopping.
OK, first post about mathematics. Nothing new here, but anyway; it’s just to show that I finally got to the “let’s write things up before I forget it all” stage.
I’ve updated my Matpack patch to work with Tiger. It’s right here. In the previous post I mentioned a problem with void ftime(timeb* tb). That has been fixed and so random number generation should work as advertised.
Oh yes, and one has to have the DYLD_LIBRARY_PATH environment variable set so that it contains the directory where the Matpack dynamic library is installed.
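Something like this in your shell profile should do it; the install prefix below is just a placeholder, not where the patch actually puts things:

```shell
# Placeholder prefix -- substitute the directory where the Matpack
# dynamic library actually ended up on your machine.
MATPACK_LIB="/usr/local/matpack/lib"

# Prepend it so the dynamic linker can find the Matpack dylib.
export DYLD_LIBRARY_PATH="${MATPACK_LIB}:${DYLD_LIBRARY_PATH}"
```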
Matpack is quite a nice and user-friendly C++ library for numerical computation and visualisation. Unfortunately it won’t compile on OS X without some (minor) changes. I’ve made a patch against the most recent version that at least compiles.
– uses the native VecLib framework for linear algebra computations (e.g. matrix inversion), so it should be reasonably efficient in that respect. On other OSes it would use BLAS, but I would expect the native framework to perform better.
– compiles as a dynamic library
– assumes libpng is installed (I used fink)
– introduces a small bug: axes in 3D plots will display numerical values only in exponential format; see mpaxis.cc
– random number generators might be a bit dodgy (even though they passed the tests in the test directory); see void ftime(timeb* tb) in compat.cc
If you give it a go, comments are welcome.