SHOGUN  4.1.0
AdaGradUpdater Class Reference

Detailed Description

The class implements the AdaGrad method.

\[ \begin{array}{l} g_\theta = {\left( \frac{ \partial f(\cdot) }{ \partial \theta } \right)}^2 + g_\theta \\ d_\theta = \alpha \frac{1}{ \sqrt{ g_\theta + \epsilon } } \frac{ \partial f(\cdot) }{ \partial \theta } \end{array} \]

where \( \frac{ \partial f(\cdot) }{\partial \theta } \) is the negative descend direction (e.g., the gradient) with respect to \(\theta\), \(\epsilon\) is a small constant used to avoid division by zero, \( \alpha \) is the built-in learning rate, and \(d_\theta\) is the corrected negative descend direction.

Duchi, John, Elad Hazan, and Yoram Singer. "Adaptive subgradient methods for online learning and stochastic optimization." The Journal of Machine Learning Research 12 (2011): 2121-2159.

Defined at line 56 of file AdaGradUpdater.h.

Inheritance diagram for AdaGradUpdater: inherits from DescendUpdaterWithCorrection.

Public Member Functions

 AdaGradUpdater ()
 
 AdaGradUpdater (float64_t learning_rate, float64_t epsilon)
 
virtual ~AdaGradUpdater ()
 
virtual void set_learning_rate (float64_t learning_rate)
 
virtual void set_epsilon (float64_t epsilon)
 
virtual void update_context (CMinimizerContext *context)
 
virtual void load_from_context (CMinimizerContext *context)
 
virtual void update_variable (SGVector< float64_t > variable_reference, SGVector< float64_t > raw_negative_descend_direction, float64_t learning_rate)
 
virtual void set_descend_correction (DescendCorrection *correction)
 
virtual bool enables_descend_correction ()
 

Protected Member Functions

virtual float64_t get_negative_descend_direction (float64_t variable, float64_t gradient, index_t idx, float64_t learning_rate)
 

Protected Attributes

float64_t m_build_in_learning_rate
 
float64_t m_epsilon
 
SGVector< float64_t > m_gradient_accuracy
 
DescendCorrection * m_correction
 

Constructor & Destructor Documentation

AdaGradUpdater ( )

Defined at line 36 of file AdaGradUpdater.cpp.

AdaGradUpdater ( float64_t  learning_rate,
float64_t  epsilon 
)

Parameterized Constructor

Parameters
learning_rate	learning rate
epsilon	epsilon

Defined at line 42 of file AdaGradUpdater.cpp.

~AdaGradUpdater ( )
virtual

Defined at line 64 of file AdaGradUpdater.cpp.

Member Function Documentation

virtual bool enables_descend_correction ( )
virtual, inherited

Do we enable descend correction?

Returns
whether descend correction is enabled

Defined at line 145 of file DescendUpdaterWithCorrection.h.

float64_t get_negative_descend_direction ( float64_t  variable,
float64_t  gradient,
index_t  idx,
float64_t  learning_rate 
)
protected, virtual

Get the negative descend direction given the current variable and gradient.

It is called from update_variable().

Parameters
variable	the current variable (e.g., \(\theta\))
gradient	the current gradient (e.g., \( \frac{ \partial f(\cdot) }{\partial \theta }\))
idx	the index of the variable
learning_rate	learning rate (for AdaGrad, this argument is NOT used because there is a built-in learning rate)
Returns
the negative descend direction (that is, \(d_\theta\))

Implements DescendUpdaterWithCorrection.

Defined at line 98 of file AdaGradUpdater.cpp.

void load_from_context ( CMinimizerContext context)
virtual

Load a context object to restore mutable variables. Usually it is used in deserialization.

This method will be called by FirstOrderMinimizer::load_from_context(CMinimizerContext* context)

Parameters
context	a context object

Reimplemented from DescendUpdaterWithCorrection.

Defined at line 87 of file AdaGradUpdater.cpp.

virtual void set_descend_correction ( DescendCorrection correction)
virtual, inherited

Set the type of descend correction

Parameters
correction	the type of descend correction

Defined at line 135 of file DescendUpdaterWithCorrection.h.

void set_epsilon ( float64_t  epsilon)
virtual

Set epsilon

Parameters
epsilon	epsilon (must be positive)

Defined at line 57 of file AdaGradUpdater.cpp.

void set_learning_rate ( float64_t  learning_rate)
virtual

Set learning rate

Parameters
learning_rate	learning rate

Defined at line 50 of file AdaGradUpdater.cpp.

void update_context ( CMinimizerContext context)
virtual

Update a context object to store mutable variables

This method will be called by FirstOrderMinimizer::save_to_context()

Parameters
context	a context object

Reimplemented from DescendUpdaterWithCorrection.

Defined at line 75 of file AdaGradUpdater.cpp.

void update_variable ( SGVector< float64_t variable_reference,
SGVector< float64_t raw_negative_descend_direction,
float64_t  learning_rate 
)
virtual

Update the target variable based on the given negative descend direction

Note that this method will update the target variable in place. This method will be called by FirstOrderMinimizer::minimize()

Parameters
variable_reference	a reference to the target variable
raw_negative_descend_direction	the negative descend direction given the current value
learning_rate	learning rate

Reimplemented from DescendUpdaterWithCorrection.

Defined at line 108 of file AdaGradUpdater.cpp.

Member Data Documentation

float64_t m_build_in_learning_rate
protected

the built-in learning rate \( \alpha \) used at each iteration

Defined at line 134 of file AdaGradUpdater.h.

DescendCorrection* m_correction
protected, inherited

descend correction object

Defined at line 165 of file DescendUpdaterWithCorrection.h.

float64_t m_epsilon
protected

\( \epsilon \)

Defined at line 137 of file AdaGradUpdater.h.

SGVector<float64_t> m_gradient_accuracy
protected

\( g_\theta \)

Defined at line 140 of file AdaGradUpdater.h.


The documentation for this class was generated from the following files:

AdaGradUpdater.h
AdaGradUpdater.cpp

SHOGUN Machine Learning Toolbox - Project Documentation