{"title": "Modeling Applications with the Focused Gamma Net", "book": "Advances in Neural Information Processing Systems", "page_first": 143, "page_last": 150, "abstract": null, "full_text": "Modeling Applications with the Focused Gamma Net \n\nJose C. Principe, Bert de Vries, Jyh-Ming Kuo and Pedro Guedes de Oliveira\u00b7 \n\nDepartment of Electrical Engineering \nUniversity of Florida, CSE 447 \nGainesville, FL 32611 \nprincipe@synapse.ee.ufl.edu \n\n*Departamento EletronicalINESC \nUniversidade de Aveiro \nA veiro, Portugal \n\nAbstract \n\nThe focused gamma network is proposed as one of the possible \nimplementations of the gamma neural model. The focused gamma \nnetwork is compared with the focused backpropagation network and \nTDNN for a time series prediction problem, and with ADALINE in \na system identification problem. \n\n1 \n\nINTRODUCTION \n\nAt NIPS-90 we introduced the gamma neural model, a real time neural net for \ntemporal processing (de Vries and Principe, 1991). This model is characterized by a \nneural short term memory mechanism, the gamma memory structure, which is \nimplemented as a tapped delay line of adaptive dispersive elements. The gamma \nmodel seems to provide an integrative framework to study the neural processing of \ntime varying patterns (de Vries and Principe, 1992). In fact both the memory by \ndelays as implemented in TDNN (Lang et aI, 1990) and memory by local feedback \n(self-recurrent loops) as proposed by Jordan (1986), and Elman (1990) are special \ncases of the gamma memory structure . The preprocessor utilized in Tank's and \nHopfield concentration in time (CIT) network (Tank and Hopfield, 1989) can be \nshown to be very similar to the dispersive structure utilized in the gamma memory \n(deVries, 1991). 
We studied the gamma memory as an independent adaptive filter structure (Principe et al., 1992) and concluded that it is a special case of a class of IIR (infinite impulse response) adaptive filters, which we called generalized feedforward structures. For these structures, the well-known Wiener-Hopf solution for the optimal filter weights can be computed analytically. One advantage of the gamma memory as an adaptive filter is that, although it is a recursive structure, stability is easily ensured. Moreover, the LMS algorithm can easily be extended to adapt all the filter weights, including the parameter that controls the depth of memory, with the same complexity as the conventional LMS algorithm (i.e., the algorithm complexity is linear in the number of weights). We therefore have a theoretical framework in which to study memory mechanisms in neural networks.

In this paper we compare the gamma neural model with other well-established neural networks that process time-varying signals. The first step is therefore to establish a topology for the gamma model. To make the comparison with TDNN and Jordan's networks easier, we present our results based on the focused gamma network, a multilayer feedforward structure with a gamma memory plane in the first layer (Figure 1). The learning equations for the focused gamma network and its memory characteristics will be addressed in detail. Examples will be presented for the prediction of complex biological signals (electroencephalogram, EEG) and chaotic time series, as well as a system identification example.

2 THE FOCUSED GAMMA NET

The focused neural architecture was introduced by Mozer (1988) and Stornetta et al. (1988).
It is characterized by a two-stage topology in which an input stage stores traces of the input signal, followed by a nonlinear continuous feedforward mapper network (Figure 1). The gamma memory plane represents the input signal in a time-space plane (spatial dimension M, temporal dimension K). The activations in the memory layer are I_{ik}(t), and the activations in the feedforward network are represented by x_i(t). The following equations therefore apply, respectively, to the input memory plane and to the feedforward network:

I_{i0}(t) = I_i(t)
I_{ik}(t) = (1 - μ) I_{ik}(t-1) + μ I_{i,k-1}(t-1),   i = 1, ..., M; k = 1, ..., K     (1)

x_i(t) = σ( Σ_{j<i} w_{ij} x_j(t) + Σ_{j,k} w_{ijk} I_{jk}(t) ),   i = 1, ..., N     (2)

where μ is an adaptive parameter that controls the depth of memory (Principe et al., 1992), and the w_{ijk} are the spatial weights. Notice that the focused gamma network for K = 1 is very similar to the focused backpropagation network of Mozer and Stornetta. Moreover, when μ = 1 the gamma memory becomes a tapped delay line, which is the configuration used in TDNN, with the time-to-space conversion restricted to the first layer (Lang et al., 1990). Notice also that if the nonlinear feedforward mapper is restricted to one layer of linear elements and μ = 1, the focused gamma network becomes the adaptive linear combiner, ADALINE (Widrow et al., 1960).

To better understand the computational properties of the gamma memory we defined two parameters, the mean memory depth D and the memory resolution R:

D = K / μ,   R = K / D = μ     (3)

(de Vries, 1991). Memory depth measures how far into the past the signal conveys information for the processing task, while resolution quantifies the temporal proximity of the memory traces.

Figure 1.
The focused gamma network architecture.

The important aspect of the gamma memory formalism is that μ, which controls both the memory resolution and the memory depth, is an adaptive parameter that is learned from the signal by optimizing a performance measure. Therefore the focused gamma network always works with the optimal memory depth/resolution for the processing problem. The gamma memory is an adaptive recursive structure and as such can go unstable during adaptation. But due to the local feedback nature of G(z), stability is easily ensured by keeping 0 < μ < 2.
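The memory recursion and the depth/resolution parameters above can be sketched in a few lines of code. The following is a minimal illustrative sketch (not the authors' implementation) of the gamma memory of equation (1) for a single input channel; the class name and method names are our own, and only the recursion, the stability bound on μ, and the definitions D = K/μ and R = μ from equation (3) come from the text.

```python
# Minimal sketch of the gamma memory recursion (Eq. 1), single input channel.
# Each tap is a leaky integrator with local feedback controlled by mu.

class GammaMemory:
    """I_0(t) = input;  I_k(t) = (1 - mu) * I_k(t-1) + mu * I_{k-1}(t-1)."""

    def __init__(self, K, mu):
        # Each tap's transfer function G(z) = mu / (z - (1 - mu)) has its pole
        # at 1 - mu, so the recursion is stable whenever 0 < mu < 2.
        assert 0.0 < mu < 2.0, "gamma memory is stable only for 0 < mu < 2"
        self.K = K
        self.mu = mu
        self.taps = [0.0] * (K + 1)   # taps[0] holds the current input I_0(t)

    def step(self, x):
        """Advance one time step with input sample x; return I_1(t)..I_K(t)."""
        prev = list(self.taps)
        self.taps[0] = x
        for k in range(1, self.K + 1):
            self.taps[k] = (1.0 - self.mu) * prev[k] + self.mu * prev[k - 1]
        return self.taps[1:]

    def depth(self):
        return self.K / self.mu       # mean memory depth D = K / mu  (Eq. 3)

    def resolution(self):
        return self.mu                # memory resolution R = K / D = mu  (Eq. 3)
```

For μ = 1 each tap reduces to a unit delay and the structure recovers the tapped delay line used in TDNN; for μ < 1 the taps smear the past, trading resolution for a deeper memory (D = K/μ > K).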