## References

N.J. Smelser & P.B. Baltes (Eds.) (2001). *Encyclopedia of the Social and Behavioral Sciences.* London: Elsevier Science.

**Article Title:** Linear Algebra for Neural Networks
**By:** Hervé Abdi
**Author Address:** Hervé Abdi, School of Human Development, MS: Gr.4.1, The University of Texas at Dallas, Richardson, TX 75083-0688, USA
**Phone:** 972 883 2065, **Fax:** 972 883 2491
**Date:** June 1, 2001
**E-mail:** herve@utdallas.edu

**Abstract**

Given the input vector $\boldsymbol{x}$ and the weight vector $\boldsymbol{w}$, the activation of the output cell is obtained as

$$a=\boldsymbol{w}^{\mathsf{T}}\boldsymbol{x}\enspace,$$

and the response (output) of the cell is

$$o=f\left(a\right)\enspace. \tag{6}$$

For example, in _backpropagation networks_, the (nonlinear) transfer function is usually the logistic function

$$o=f\left(a\right)=\operatorname{logist}\boldsymbol{w}^{\mathsf{T}}\boldsymbol{x}=\frac{1}{1+\exp\{-a\}}\enspace. \tag{7}$$
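As a minimal sketch of the computation in Eqs. (6) and (7), the response of a single output cell can be written with NumPy as below. The function names (`logistic`, `output_cell_response`) and the numerical values of `w` and `x` are illustrative assumptions, not taken from the article; only the formulas $a=\boldsymbol{w}^{\mathsf{T}}\boldsymbol{x}$ and $o=f(a)$ come from the text.

```python
import numpy as np

def logistic(a):
    """Logistic transfer function f(a) = 1 / (1 + exp(-a)), as in Eq. (7)."""
    return 1.0 / (1.0 + np.exp(-a))

def output_cell_response(w, x):
    """Response of one output cell: o = f(a), with activation a = w^T x (Eq. 6)."""
    a = np.dot(w, x)       # activation: inner product of weight and input vectors
    return logistic(a)     # output after the (nonlinear) transfer function

# Example usage with hypothetical values for w and x
w = np.array([0.2, -0.5, 0.1])   # weight vector (illustrative)
x = np.array([1.0, 0.3, -2.0])   # input vector (illustrative)
print(output_cell_response(w, x))
```

In a backpropagation network this scalar case generalizes directly: stacking the weight vectors of several cells as rows of a matrix turns the inner product into a matrix-vector product, with the logistic function applied elementwise.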