
Revisions №58377

branch: master (№58377)
Committed by: Vikram K. Mulligan
GitHub commit link: cca2c0985bc7701a (pull request №995)
Difference from previous tested commit: code diff
Commit date: 2015-12-24 16:57:38

Merge pull request #995 from RosettaCommons/vmullig/iterated_linmin

Add a simple gradient-descent minimization algorithm

The Hessian approximations in Rosetta really trip me up when I'm trying to use the minimizer for anything other than minimizing energy in torsion space. This pull request adds a simple, Hessian-free, iterative gradient-descent algorithm to the minimizer (basically, iterated linmin calls with a convergence check after each iteration). Simple gradient descent (first-derivative-based minimization) is generally less efficient than approaches that use the Hessian or an approximation thereof (first- and second-derivative-based minimization), but there is no guarantee that the DFP or LBFGS approximations of the Hessian matrix work for all functions; indeed, they are poor approximations in some cases that mathematicians have characterized but the rest of us don't think about. Having the option of doing plain, vanilla gradient descent is therefore useful, at least for checking whether it's the Hessian approximation that's tripping up a minimization protocol.

Tasks:
- Add linmin_iterated and linmin_iterated_atol minimization flavours.
- Add a function to do iterated gradient descent with a convergence check.
- Add a unit test.
- Update documentation.
- Add the new minimization type to options_rosetta.py in branch aleaverfay/lbfgs_as_default (pull request #327).

I'd suggest that we start using linmin_iterated or linmin_iterated_atol in minimizer and FastRelax benchmarks, sort of as our negative control (minimization with no Hessian approximation). It SHOULD converge more slowly (i.e. with more iterations) than dfpmin or lbfgs. If anything else converges more slowly, though, that should be a big red flag for that other algorithm.
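For readers unfamiliar with the idea, the following is a minimal C++ sketch of what "iterated linmin with a convergence check" means: repeated line minimizations along the negative gradient, stopping when the per-iteration improvement drops below an absolute tolerance (analogous to the "_atol" flavour). This is an illustrative sketch only, not the Rosetta Minimizer source; all names (line_minimize, linmin_iterated_sketch, tol, max_iter) are hypothetical, and a real linmin uses proper bracketing and Brent's method rather than the crude step scan shown here.

```cpp
// Sketch (not Rosetta source) of iterated gradient descent:
// repeated line minimizations along -gradient, with a convergence
// check after each iteration.  All names here are hypothetical.

#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

using Vec = std::vector<double>;

// Approximate 1-D minimization of f(x + alpha*dir) over alpha > 0 by a
// crude doubling scan; stands in for a real linmin (bracketing + Brent).
double line_minimize(
    std::function<double(Vec const &)> const & f,
    Vec & x, Vec const & dir )
{
    double best_alpha = 0.0;
    double best_val = f( x );
    for ( double alpha = 1e-6; alpha <= 1.0; alpha *= 2.0 ) {
        Vec trial( x );
        for ( std::size_t i = 0; i < x.size(); ++i ) trial[i] += alpha * dir[i];
        double const val = f( trial );
        if ( val < best_val ) { best_val = val; best_alpha = alpha; }
    }
    for ( std::size_t i = 0; i < x.size(); ++i ) x[i] += best_alpha * dir[i];
    return best_val;
}

// Iterated gradient descent: line-minimize along -grad(x) until the
// improvement per iteration falls below an absolute tolerance (tol)
// or max_iter is reached.  No Hessian or Hessian approximation is used.
double linmin_iterated_sketch(
    std::function<double(Vec const &)> const & f,
    std::function<Vec(Vec const &)> const & grad,
    Vec & x, double const tol = 1e-6, int const max_iter = 1000 )
{
    double prev = f( x );
    for ( int iter = 0; iter < max_iter; ++iter ) {
        Vec g = grad( x );
        for ( double & gi : g ) gi = -gi;                 // descend along -gradient
        double const cur = line_minimize( f, x, g );
        if ( std::fabs( prev - cur ) < tol ) return cur;  // converged
        prev = cur;
    }
    return prev;
}
```

Because each iteration uses only first derivatives, this loop is expected to take more iterations than DFP or LBFGS on well-behaved functions, which is exactly why the pull request proposes it as a negative control in minimizer benchmarks.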

...