Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
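The two startup lines above are Gradio's standard `launch()` output. As a point of reference, a minimal sketch that produces them; the actual app's interface is not shown in this log, so the `echo` handler and the `demo` name are stand-ins:

```python
import gradio as gr

# Placeholder handler; the real app wires the RAG pipeline in here.
def echo(message):
    return message

demo = gr.Interface(fn=echo, inputs="text", outputs="text")

# Prints "Running on local URL: http://127.0.0.1:7860" and, with the
# default share=False, the hint about setting share=True in launch().
demo.launch()
```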
Updating path
Selected DOC path: linear_algebra_for_nn.mmd
FOUND DOC_PATH ../documents/mmds/linear_algebra_for_nn.mmd
..\documents\vector_db\intfloat\multilingual-e5-large\linear_algebra_for_nn.mmd found, not recreating a vector store
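The path check above suggests the app caches one vector store per embedding model and document, and skips re-embedding when the store already exists. A hypothetical reconstruction of that check; `VECTOR_DB_ROOT` and `get_vector_store_path` are assumptions, not the app's actual code:

```python
import os

# Store layout observed in the log: <root>/<embedding model>/<doc name>.
VECTOR_DB_ROOT = os.path.join("..", "documents", "vector_db")
EMBEDDING_MODEL = "intfloat/multilingual-e5-large"

def get_vector_store_path(doc_name: str) -> str:
    return os.path.join(VECTOR_DB_ROOT, EMBEDDING_MODEL, doc_name)

store_path = get_vector_store_path("linear_algebra_for_nn.mmd")
if os.path.exists(store_path):
    print(f"{store_path} found, not recreating a vector store")
else:
    print(f"{store_path} missing, (re)building the vector store")  # hypothetical branch
```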
doc_name: linear_algebra_for_nn.mmd
input_user: Explain the projection of one vector onto another vector
search results: [Document(page_content='### Projection of one vector onto another vector\n\nThe (orthogonal) projection of vector $\\boldsymbol{x}$ on vector $\\boldsymbol{w}$ is defined as\n\n$$\\mathsf{proj}_{\\boldsymbol{w}}\\boldsymbol{x}=\\frac{\\boldsymbol{x}^{\\mskip-1.5mu \\mathsf{T}} \\boldsymbol{w}}{\\boldsymbol{w}^{\\mskip-1.5mu \\mathsf{T}}\\boldsymbol{w}} \\boldsymbol{w}=\\mathsf{cos}(\\boldsymbol{x},\\boldsymbol{w})\\times\\frac{\\| \\boldsymbol{x}\\|}{\\|\\boldsymbol{w}\\|}\\boldsymbol{w}. \\tag{3}$$'), Document(page_content='This technique can be extended to matrix and vector functions. It involves the notion of gradient and Hessian. Now a vector function $f\\left(\\boldsymbol{x}\\right)$ is expressed as:\n\n$$f\\left(\\boldsymbol{x}\\right)=f\\left(\\boldsymbol{a}\\right)+f\\left(\\boldsymbol{ x-a}\\right)^{\\mathsf{T}}\\boldsymbol{\\nabla}_{\\boldsymbol{f(a)}}+f\\left( \\boldsymbol{x-a}\\right)^{\\mathsf{T}}\\boldsymbol{\\nabla}_{\\boldsymbol{f(a)}}^ {2}f\\left(\\boldsymbol{x-a}\\right)+\\mathcal{R}_{2}. \\tag{31}$$'), Document(page_content='So the activation is proportional to the norm of the projection of the input vector onto the weight vector. The _response_ or _output_ of the cell is denoted $o$. For a _linear cell_, it is proportional to the activation (for convenience, assume that the proportionality constant is equal to $1$). _Linear heteroassociators_ and _autoassociators_ are made of linear cells. In general, the output of a cell is a _function_ (often, but not necessarily, continuous), called the _transfer function_, of its'), Document(page_content='The weight vector is corrected by moving it in the opposite direction of the gradient. This is obtained by adding a small vector denoted $\\boldsymbol{\\Delta}_{\\boldsymbol{w}}$ opposite to the gradient. This gives the following correction for iteration $n+1$:')]
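The top hit quotes Equation 3, the orthogonal projection of x onto w. A minimal NumPy check of that formula; the vectors here are arbitrary examples:

```python
import numpy as np

def proj(w, x):
    # Equation 3: proj_w(x) = (x.T w / w.T w) w
    return (x @ w) / (w @ w) * w

x = np.array([3.0, 4.0])
w = np.array([1.0, 0.0])
p = proj(w, x)
print(p)                 # [3. 0.], the component of x along w
print(np.dot(x - p, w))  # 0.0: the residual x - proj_w(x) is orthogonal to w
```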
FOUND DOC_PATH ../documents/mmds/linear_algebra_for_nn.mmd
..\documents\vector_db\intfloat\multilingual-e5-large\linear_algebra_for_nn.mmd found, not recreating a vector store
doc_name: linear_algebra_for_nn.mmd
input_user: Explain the projwx formula
search results: [Document(page_content='The norm of $\\mathsf{proj}_{\\boldsymbol{w}}\\boldsymbol{x}$ is its distance to the origin of the space. It is equal to\n\n$$\\|\\mathsf{proj}_{\\boldsymbol{w}}\\boldsymbol{x}\\|=\\frac{|\\boldsymbol{x}^{ \\mskip-1.5mu \\mathsf{T}}\\boldsymbol{w}|}{\\|\\boldsymbol{w}\\|}=|\\mathsf{ cos}(\\boldsymbol{x},\\boldsymbol{y})|\\times\\|\\boldsymbol{x}\\|\\enspace. \\tag{4}$$'), Document(page_content='$\\boldsymbol{x}$, and $\\boldsymbol{w}$, the activation of the output cell is obtained as'), Document(page_content='$$\\boldsymbol{w}_{[n+1]}=\\boldsymbol{w}_{[n]}+\\boldsymbol{\\Delta}_{\\boldsymbol{ w}}=\\boldsymbol{w}_{[n]}-\\eta\\frac{\\partial e}{\\partial\\boldsymbol{w}}= \\boldsymbol{w}_{[n]}+\\eta(t-\\boldsymbol{w}^{\\mathsf{T}}\\boldsymbol{x}) \\boldsymbol{x}=\\boldsymbol{w}_{[n]}+\\eta(t-o)\\boldsymbol{x}. \\tag{35}$$\n\nThis gives the rule defined by Equation 9.'), Document(page_content='### Projection of one vector onto another vector\n\nThe (orthogonal) projection of vector $\\boldsymbol{x}$ on vector $\\boldsymbol{w}$ is defined as\n\n$$\\mathsf{proj}_{\\boldsymbol{w}}\\boldsymbol{x}=\\frac{\\boldsymbol{x}^{\\mskip-1.5mu \\mathsf{T}} \\boldsymbol{w}}{\\boldsymbol{w}^{\\mskip-1.5mu \\mathsf{T}}\\boldsymbol{w}} \\boldsymbol{w}=\\mathsf{cos}(\\boldsymbol{x},\\boldsymbol{w})\\times\\frac{\\| \\boldsymbol{x}\\|}{\\|\\boldsymbol{w}\\|}\\boldsymbol{w}. \\tag{3}$$')]
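These hits pair Equation 4, the norm of the projection, with Equation 35, the error-correction step that moves the weight vector against the gradient. A small NumPy sketch of both, with an arbitrary target t and learning rate eta:

```python
import numpy as np

x = np.array([3.0, 4.0])
w = np.array([2.0, 0.0])

# Equation 4: ||proj_w(x)|| = |x.T w| / ||w||
print(abs(x @ w) / np.linalg.norm(w))  # 3.0

# Equation 35: w <- w + eta * (t - w.T x) * x  (one step for a linear cell)
eta, t = 0.1, 1.0
o = w @ x                       # the cell's output, o = w.T x
w_next = w + eta * (t - o) * x
print(w_next)                   # [ 0.5 -2. ], moved opposite the error gradient
```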
FOUND DOC_PATH ../documents/mmds/linear_algebra_for_nn.mmd
..\documents\vector_db\intfloat\multilingual-e5-large\linear_algebra_for_nn.mmd found, not recreating a vector store
doc_name: linear_algebra_for_nn.mmd
input_user: Write the Hessian matrix to flex your latex capacities
search results: [Document(page_content='This technique can be extended to matrix and vector functions. It involves the notion of gradient and Hessian. Now a vector function $f\\left(\\boldsymbol{x}\\right)$ is expressed as:\n\n$$f\\left(\\boldsymbol{x}\\right)=f\\left(\\boldsymbol{a}\\right)+f\\left(\\boldsymbol{ x-a}\\right)^{\\mathsf{T}}\\boldsymbol{\\nabla}_{\\boldsymbol{f(a)}}+f\\left( \\boldsymbol{x-a}\\right)^{\\mathsf{T}}\\boldsymbol{\\nabla}_{\\boldsymbol{f(a)}}^ {2}f\\left(\\boldsymbol{x-a}\\right)+\\mathcal{R}_{2}. \\tag{31}$$'), Document(page_content='When a function is twice differentiable, the second order derivatives are stored in a matrix called the _Hessian_ matrix of the function. It is often denoted by $\\mathbf{H}$ or $\\mathbf{\\nabla}_{\\mathbf{f}}^{\\mathsf{2}}$ and is formally defined as'), Document(page_content="_Newton's method_ is a second order Taylor approximation, it uses the inverse of the Hessian of $\\boldsymbol{w}$ (supposing it exists). It gives a better numerical approximation but necessitates more computation. Here the correction for iteration $n+1$ is\n\n$$\\boldsymbol{w}_{[n+1]}=\\boldsymbol{w}_{[n]}+\\boldsymbol{\\Delta}=\\boldsymbol{w} _{[n]}-(\\boldsymbol{H}^{-1})(\\boldsymbol{\\nabla}_{\\boldsymbol{f(w)}}) \\tag{36}$$\n\n(where $\\boldsymbol{\\nabla}_{\\boldsymbol{f(w)}}$ is computed for $\\boldsymbol{w}_{[n]}$)."), Document(page_content='So the activation is proportional to the norm of the projection of the input vector onto the weight vector. The _response_ or _output_ of the cell is denoted $o$. For a _linear cell_, it is proportional to the activation (for convenience, assume that the proportionality constant is equal to $1$). _Linear heteroassociators_ and _autoassociators_ are made of linear cells. In general, the output of a cell is a _function_ (often, but not necessarily, continuous), called the _transfer function_, of its')]
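The retrieved passages define the Hessian and quote Equation 36, the Newton correction w = w - H^{-1} grad. A sketch on the squared error of a linear cell, e(w) = (t - w.T x)^2, whose gradient is -2(t - w.T x)x and whose Hessian is the constant matrix 2 x x.T; since x x.T is rank one, a pseudo-inverse stands in for H^{-1} in this sketch:

```python
import numpy as np

x = np.array([1.0, 2.0])
t = 1.0
w = np.array([0.5, 0.5])

grad = -2.0 * (t - w @ x) * x   # gradient of e(w) = (t - w.T x)^2
hess = 2.0 * np.outer(x, x)     # Hessian: constant for this quadratic error

# Equation 36, with pinv in place of the (here singular) inverse.
w_next = w - np.linalg.pinv(hess) @ grad
print(w_next, (t - w_next @ x) ** 2)  # error reaches 0 in a single step
```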
FOUND DOC_PATH ../documents/mmds/linear_algebra_for_nn.mmd
..\documents\vector_db\intfloat\multilingual-e5-large\linear_algebra_for_nn.mmd found, not recreating a vector store
doc_name: linear_algebra_for_nn.mmd
input_user: sadly it doesn't render properly ...
search results: [Document(page_content='## References'), Document(page_content='from:\n\n**N.J., Smelter, & P.B., Baltes (Eds.) (2001).**\n\n**Encyclopedia of the Social and Behavioral Sciences.**\n\n**London: Elsevier Science.**\n\n**Article Title: Linear Algebra for Neural Networks**\n\n**By: Herve Abdi**\n\n**Author Address:** Herve Abdi, School of Human Development, MS: Gr.4.1, The University of Texas at Dallas, Richardson, TX 750833-0688, USA\n\n**Phone:** 972 883 2065, **fax:** 972 883 2491 **Date:** June 1, 2001\n\n**E-mail:** herve@utdallas.edu\n\n**Abstract**'), Document(page_content='$$o=f\\left(a\\right)\\enspace. \\tag{6}$$\n\nFor example, in _backpropagation networks_, the (nonlinear) transfer function is usually the logistic function\n\n$$o=f\\left(a\\right)=\\operatorname{logist}\\boldsymbol{w}^{\\mskip-1.5mu \\mathsf{T}} \\boldsymbol{x}=\\frac{1}{1+\\exp\\{-a\\}}\\enspace. \\tag{7}$$'), Document(page_content='$\\boldsymbol{x}$, and $\\boldsymbol{w}$, the activation of the output cell is obtained as')]
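The third hit quotes Equation 7, the logistic transfer function applied to the activation a = w.T x. For completeness, the same computation in NumPy; the vectors are arbitrary examples:

```python
import numpy as np

def logistic(a):
    # Equation 7: o = 1 / (1 + exp(-a))
    return 1.0 / (1.0 + np.exp(-a))

x = np.array([3.0, 4.0])
w = np.array([0.1, -0.2])
a = w @ x              # activation of the cell, a = w.T x
print(logistic(a))     # ~0.3775, the cell's output o
```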
Keyboard interruption in main thread... closing server.