Matrix Calculus for Deep Learning (Part 2)
May 29, 2020
We can’t compute partial derivatives of very complicated functions using just the basic matrix calculus rules we saw in Part 1 of this blog. For example, we can’t take the derivative of a nested expression like sum(w + x) directly without reducing it to its scalar equivalent. We need to be able to combine our basic vector rules using the vector chain rule.
The paper defines and names three different chain rules:
- single-variable chain rule
- single-variable total-derivative chain rule
- vector chain rule
The chain rule comes into play when we need the derivative of an expression composed of nested subexpressions. It solves the problem by breaking a complicated expression into subexpressions whose derivatives are easy to compute.
Single-variable chain rule
The single-variable chain rule is defined in terms of nested functions such as y = f(g(x)). Introducing the intermediate variable u = g(x), the formula is
dy/dx = (dy/du) (du/dx)
There are four steps to solve a derivative using the single-variable chain rule:
- Introduce intermediate variables.
- Compute the derivatives of the intermediate variables with respect to their parameters.
- Combine all derivatives by multiplying them together.
- Substitute the intermediate variables back into the derivative equation.
Let’s see an example with the nested equation y = f(x) = ln(sin(x³)²).
The key is to compute the derivatives of the intermediate variables in isolation!
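The four steps above can be sketched in code. This is a minimal illustration (not from the original post): the intermediates, their local derivatives, and a finite-difference sanity check are all my own naming.

```python
import math

# Worked sketch of the four steps for y = ln(sin(x^3)^2).
# Step 1: introduce intermediates  u1 = x^3, u2 = sin(u1), u3 = u2^2, y = ln(u3)
# Step 2: local derivatives        du1/dx = 3x^2, du2/du1 = cos(u1),
#                                  du3/du2 = 2*u2, dy/du3 = 1/u3
# Step 3: multiply them together
# Step 4: substitute the intermediates back in terms of x

def f(x):
    return math.log(math.sin(x**3) ** 2)

def df(x):
    u1 = x**3
    u2 = math.sin(u1)
    u3 = u2**2
    # chain of local derivatives, multiplied together (step 3)
    return (1.0 / u3) * (2.0 * u2) * math.cos(u1) * (3.0 * x**2)
    # after substitution (step 4) this simplifies to 6*x**2 / tan(x**3)

# sanity check against a central finite difference
x, h = 1.1, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)
assert abs(df(x) - numeric) < 1e-5
```

The finite-difference check is a cheap way to catch mistakes when combining the local derivatives by hand.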
But the single-variable chain rule is applicable only when a single variable can influence the output in only one way. As we see in the example, we can handle a nested expression of a single variable x using this chain rule only when x can affect y through a single data-flow path.
Single-variable total-derivative chain rule
If we apply the single-variable chain rule to y = f(x) = x + x², we get the wrong answer, because that derivative operator does not apply to multivariate functions. A change in x affects y both as an operand of the addition and as an operand of the square, so we clearly can’t apply the single-variable chain rule.
Instead, we move to total derivatives: to compute dy/dx, we need to sum up all possible contributions from changes in x to the change in y.
The formula for the total-derivative chain rule, with intermediate variables u₁, …, uₙ, is
∂y/∂x = ∂f(x, u₁, …, uₙ)/∂x + Σᵢ (∂f/∂uᵢ)(∂uᵢ/∂x)
The total derivative assumes all variables are potentially co-dependent, whereas the partial derivative assumes all variables but x are constants. When you take the total derivative with respect to x, other variables might also be functions of x, so you add in their contributions as well. The left-hand side of the equation looks like a typical partial derivative, but the right-hand side is actually the total derivative.
Let’s see an example. The total-derivative formula always sums terms in the derivative. For example, given y = x × x² instead of y = x + x², the total-derivative chain rule formula still adds partial-derivative terms; for more detail, see the demonstration in the paper.
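Both cases can be checked with a short sketch. The function names and the split into partials are my own illustration, not code from the post:

```python
# Total-derivative sketch: y = f(x, u) = x + u with the intermediate u = x^2.
# dy/dx = df/dx + (df/du)(du/dx) = 1 + 1*(2x) = 1 + 2x

def dy_dx(x):
    df_dx = 1.0          # partial wrt the explicit x operand (u held constant)
    df_du = 1.0          # partial wrt the intermediate u = x^2
    du_dx = 2.0 * x      # derivative of the intermediate wrt x
    return df_dx + df_du * du_dx   # sum of all contributions

# Same machinery on y = x * x^2: partials multiply in, but the terms still ADD.
def dy_dx_product(x):
    u = x**2
    df_dx = u            # d(x*u)/dx with u held constant
    df_du = x            # d(x*u)/du
    du_dx = 2.0 * x
    return df_dx + df_du * du_dx   # = x^2 + 2x^2 = 3x^2, matching d(x^3)/dx

assert dy_dx(3.0) == 7.0           # 1 + 2*3
assert dy_dx_product(2.0) == 12.0  # 3 * 2^2
```

Note that even for the product, the rule sums two contribution terms; only the partials inside each term involve multiplication.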
The formula for the total derivative can be simplified further: by treating x itself as another intermediate variable with derivative 1 with respect to x, every contribution takes the same (∂f/∂uᵢ)(∂uᵢ/∂x) form. This chain rule, which takes the total derivative into consideration, degenerates to the single-variable chain rule when all intermediate variables are functions of a single variable.
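The degenerate case is easy to verify numerically: when f depends on x only through a single intermediate, the total-derivative sum collapses to the familiar product of two derivatives. A minimal sketch (all names here are illustrative):

```python
import math

# y = f(u) = u^2 with a single intermediate u = g(x) = sin(x),
# and no direct dependence of f on x.
def total_derivative(x):
    u = math.sin(x)          # the single intermediate
    df_dx = 0.0              # f has no direct x term
    df_du = 2.0 * u          # derivative of u^2 wrt u
    du_dx = math.cos(x)      # derivative of sin(x) wrt x
    return df_dx + df_du * du_dx   # sum collapses to (dy/du)(du/dx)

# matches the single-variable chain rule: d/dx sin(x)^2 = 2 sin(x) cos(x)
x = 0.7
assert abs(total_derivative(x) - 2 * math.sin(x) * math.cos(x)) < 1e-12
```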
Vector chain rule
Consider the derivative of a sample vector function with respect to a scalar, y = f(x).
We introduce two intermediate variables, g₁ and g₂, one for each fᵢ, so that y looks more like y = f(g(x)).
If we split the terms, isolating them into a vector, we get a matrix multiplied by a vector: ∂y/∂x = (∂y/∂g)(∂g/∂x), the Jacobian of f with respect to g times the Jacobian of g with respect to x.
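The matrix-by-vector structure can be sketched concretely. The specific functions below are my own illustration in the spirit of the paper’s example: y₁ = ln(g₁), y₂ = sin(g₂) with intermediates g₁ = x², g₂ = 3x, so ∂y/∂x is a 2×2 Jacobian times a 2×1 vector.

```python
import math

# Vector chain rule sketch: y = f(g(x)) for scalar x,
# with y1 = ln(g1), y2 = sin(g2), g1 = x^2, g2 = 3x.
def dy_dx(x):
    g1, g2 = x**2, 3.0 * x
    # Jacobian of f wrt g (off-diagonal entries are zero here,
    # since y1 depends only on g1 and y2 only on g2)
    J_f = [[1.0 / g1, 0.0],
           [0.0, math.cos(g2)]]
    # Jacobian of g wrt the scalar x: a 2x1 column vector
    J_g = [2.0 * x, 3.0]
    # matrix-by-vector product gives the 2x1 derivative dy/dx
    return [J_f[i][0] * J_g[0] + J_f[i][1] * J_g[1] for i in range(2)]

# closed forms: d/dx ln(x^2) = 2/x and d/dx sin(3x) = 3 cos(3x)
x = 1.3
dy = dy_dx(x)
assert abs(dy[0] - 2.0 / x) < 1e-12
assert abs(dy[1] - 3.0 * math.cos(3.0 * x)) < 1e-12
```

The diagonal Jacobian here is a common special case; in general every yᵢ may depend on every gⱼ, and the same matrix product still applies.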
This completes the chain rule. In the next blog, Part 3, we will see how to apply these gradients to neural-network activation and loss functions, and wrap up.
Thank you.
Useful Points:
While writing a blog in markdown it is difficult to convert text to superscript and subscript, so I have listed characters below which you can use (copy-paste) in your markdown.
Superscript: ⁰ ¹ ² ³ ⁴ ⁵ ⁶ ⁷ ⁸ ⁹ ᵃ ᵇ ᶜ ᵈ ᵉ ᶠ ᵍ ʰ ᶦ ʲ ᵏ ˡ ᵐ ⁿ ᵒ ᵖ ʳ ˢ ᵗ ᵘ ᵛ ʷ ˣ ʸ ᶻ
Subscript: ₀ ₁ ₂ ₃ ₄ ₅ ₆ ₇ ₈ ₉ ₐ ᵦ ₑ ₕ ᵢ ⱼ ₖ ₗ ₘ ₙ ₒ ₚ ᵩ ᵣ ₛ ₜ ᵤ ᵥ ₓ ᵧ