<span style="float:right;"><a href="https://github.com/RubixML/ML/blob/master/src/NeuralNet/Optimizers/AdaGrad.php">[source]</a></span>

# AdaGrad
Short for *Adaptive Gradient*, the AdaGrad Optimizer speeds up the learning of parameters that do not change often and slows down the learning of parameters that enjoy heavy activity [^1]. Because AdaGrad's per-parameter step size only ever decays, training may be slow or fail to converge when the learning rate is set too low.
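
To make the adaptive step concrete, the snippet below is a minimal sketch of the per-parameter update rule rather than the library's internal implementation. The accumulated squared gradient only grows, so the effective step size for a busy parameter shrinks over time; the function and variable names (`adaGradStep`, `$cache`, `$epsilon`) are illustrative.

```php
/**
 * Apply one AdaGrad step to a flat parameter vector (illustrative only).
 * $cache holds the running sum of squared gradients between calls.
 */
function adaGradStep(array $params, array $gradient, array &$cache, float $rate = 0.01, float $epsilon = 1e-8) : array
{
    foreach ($gradient as $i => $g) {
        // Accumulate the squared gradient observed for this parameter so far.
        $cache[$i] = ($cache[$i] ?? 0.0) + $g ** 2;

        // Divide the global step by the root of the accumulated magnitude so that
        // rarely-updated parameters take larger steps than frequently-updated ones.
        $params[$i] -= $rate * $g / (sqrt($cache[$i]) + $epsilon);
    }

    return $params;
}
```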

## Parameters
| # | Name | Default | Type | Description |
|---|---|---|---|---|
| 1 | rate | 0.01 | float | The learning rate that controls the global step size. |

## Example
```php
use Rubix\ML\NeuralNet\Optimizers\AdaGrad;

$optimizer = new AdaGrad(0.125);
```
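
The optimizer is not used on its own; it is passed to a gradient descent-based learner at construction time. The sketch below assumes the MultilayerPerceptron classifier accepts its hidden layers, batch size, and optimizer as its first three constructor arguments; check that learner's documentation for the exact signature.

```php
use Rubix\ML\Classifiers\MultilayerPerceptron;
use Rubix\ML\NeuralNet\Layers\Dense;
use Rubix\ML\NeuralNet\Layers\Activation;
use Rubix\ML\NeuralNet\ActivationFunctions\ReLU;
use Rubix\ML\NeuralNet\Optimizers\AdaGrad;

// Train the network's parameters with AdaGrad instead of the default optimizer.
$estimator = new MultilayerPerceptron([
    new Dense(100),
    new Activation(new ReLU()),
], 128, new AdaGrad(0.125));
```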

## References
[^1]: J. Duchi et al. (2011). Adaptive Subgradient Methods for Online Learning and Stochastic Optimization.