I am compiling a list of numerical packages in modern* programming languages. If you have any comments about these packages, or know of any I am not familiar with, please e-mail me at mds (at sign) soe dot ucsc dot edu.
|Package||Language||Developer||Usability Notes||Efficiency Notes (space and time)||Completeness notes||Likelihood Mike will use it|
|Gnu Scientific Library (GSL)||C||Various (Gnu)||More later||More later||Lots of stuff, more later.||Collaborator is currently using|
|JAMA (A Java Matrix Package)||Java||NIST||Separate classes for "Matrix", "CholeskyDecomposition", "LUDecomposition", etc., which, frankly, is the correct way to handle these matrix decompositions. Open source. Public interfaces are very nice; internals are written in that incomprehensible "Numerical Recipes in XXX" style.||Written in Java. Separate classes for the various decompositions are the correct way to handle efficiency and usability together.||Dense matrices only. Contains most of what I'd want, though I'd like a little extra functionality (a simplex method would be nice).|
|Matrix tools for Java||Java||Pointed out by David Garmire|
|JamPack||Java||NIST and UMD||Previous information posted here was either incorrect or out-of-date.||Previous information posted here was either incorrect or out-of-date.||Previous information posted here was either incorrect or out-of-date.||Reconsidering|
|BOOST BLAS (basic linear algebra subroutines)||C++ (heavily templatized)||BOOST development team||Most of the BOOST team's software is very usable once you move up to large software projects. Not for the template-phobic.||See justification notes. Appears to be an experiment to see if C++ matrix code can compete with FORTRAN. Templates are the correct way to do this.||Dense and sparse matrices; apparent compliance with "BLAS".||Likely|
|Template Numerical Toolkit||C++ (templatized)||NIST (apparently mostly this guy)||More later||More later||More later||More later|
|Blitz++||C++ (again, templatized)||Todd Veldhuizen et al.||Sample code looks fairly readable. Only works with some compilers.||Claims "performance on par with Fortran 77/90". Update: performance claims are well documented.||Dense matrices, multidimensional arrays, and tensors. Parallel computing support.||I'll try Boost first, but good to know it's there.|
|POOMA||C++ (templatized - Notice a theme here?)||LANL||More here later||Parallel, templatized, claims "comparable to Fortran"||Parallel, geared around PDEs||I'll try Boost first, but good to know it's there.|
|SciPy||Python (see this for more info)||Enthought plus others. (Too bad their developers are in Austin)||Don't know yet, but regular Python is one of the easiest languages ever for writing small programs and software glue.||Probably depends on how much of your computation is inside the library and how much is in Python. Python's language definition almost forces it to be slow, but it also makes it incredibly easy for Python to wrap efficient routines written in other languages.||Wraps LAPACK||Gotta try it; at least it'll make a free Matlab competitor with better C++ integration.|
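As a rough illustration of what "wraps LAPACK" buys you, here is a minimal sketch using `scipy.linalg` (routine names are from the current SciPy API; `cholesky` and `cho_solve` dispatch to LAPACK's `potrf`/`potrs` under the hood):

```python
import numpy as np
from scipy import linalg

# A small symmetric positive-definite system, solved via the
# LAPACK routines that scipy.linalg wraps.
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
b = np.array([1.0, 2.0])

# Cholesky factorization, then two triangular solves.
L = linalg.cholesky(A, lower=True)
x = linalg.cho_solve((L, True), b)

print(x)  # solution of A x = b
```

The glue code stays in Python while the floating-point work happens in compiled LAPACK, which is exactly the division of labor the efficiency note above describes.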
|PSciCo||SML (I kid you not)||The ML mafia at CMU||Probably depends on how you feel about SML. Apparently it is far easier to write parallel code in SML variants than in C/C++.||Aimed at parallelism. No mention of efficiency on the web page, but it's conceivable that as compiler technology improves and all computing becomes a lot more distributed, an SML program could outperform older languages.||Minimal matrix support (simple operations and Gauss-Jordan, enough for most computer graphics), heavy emphasis on computational geometry (going so far as to split computational geometry from computational topology). Both seem to indicate that the intended audience may be robotics researchers rather than traditional scientific computing researchers.||Cool experiment, but I'll hold off on it.|
*To avoid pedantic arguments, this should read "modern languages and C".