Download PDF by Mike Page (auth.), John A. Bullinaria BSc, MSc, PhD, David W. Glasspool BSc, MSc, George Houghton BA, MSc, PhD (eds.): 4th Neural Computation and Psychology Workshop, London, 9–11 April 1997: Connectionist Representations

By Mike Page (auth.), John A. Bullinaria BSc, MSc, PhD, David W. Glasspool BSc, MSc, George Houghton BA, MSc, PhD (eds.)

This volume collects together refereed versions of twenty-five papers presented at the 4th Neural Computation and Psychology Workshop, held at University College London in April 1997. The "NCPW" workshop series is now well established as a lively forum which brings together researchers from such diverse disciplines as artificial intelligence, mathematics, cognitive science, computer science, neurobiology, philosophy and psychology to discuss their work on connectionist modelling in psychology. The general theme of this fourth workshop in the series was "Connectionist Representations", a topic which not only attracted participants from all of these fields, but from all over the world as well. From the point of view of the conference organisers, concentrating on representational issues had the advantage that it immediately involved researchers from all branches of neural computation. Being so central both to psychology and to connectionist modelling, it is one area about which everyone in the field has their own strong views, and the variety and quality of the presentations and, just as importantly, the discussion which followed them, certainly attested to this.


Read Online or Download 4th Neural Computation and Psychology Workshop, London, 9–11 April 1997: Connectionist Representations PDF

Best psychology books

Download e-book for kindle: Autopilot — The Art and Science of Doing Nothing by Andrew Smart

Andrew Smart wants you to sit and do nothing much more often – and he has the science to explain why. At every turn we're pushed to do more, faster and more efficiently: that drumbeat resounds throughout our wage-slave society. Multitasking is not just a virtue, it's a necessity. Books such as Getting Things Done, The One Minute Manager, and The 7 Habits of Highly Effective People regularly top the bestseller lists, and have spawned a considerable industry.

Extra resources for 4th Neural Computation and Psychology Workshop, London, 9–11 April 1997: Connectionist Representations

Example text

The network output is given by

$$Y(\mathbf{x}) = \sum_{i=1}^{N} w_i \exp\!\left[-\tfrac{1}{2}\sum_{j=1}^{P} v_{ij}^{2}\,(x_j - c_{ij})^{2}\right] \qquad (2)$$

where there are P inputs and N hidden units. If a norm weight is zero then the corresponding input line is ignored. A nonzero norm weight indicates the attention strength along the corresponding input. With a diagonal covariance matrix the network is only able to pay attention to individual orthogonal dimensions. The output of the Gaussian function is greatest when its argument is zero. A Gaussian RBF unit therefore has maximal output of one when the input matches the centre vector.
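As a rough illustration of equation (2), the sketch below computes the output of such a network for a single input vector. The function and variable names are assumptions made for this example, not code from the paper.

```python
import numpy as np

def rbf_output(x, centres, norm_weights, output_weights):
    """Output of a Gaussian RBF network with per-input 'norm' (attention) weights.

    A minimal sketch of equation (2), under assumed names:
    x              : (P,)   input vector
    centres        : (N, P) centre vectors c_ij of the N hidden units
    norm_weights   : (N, P) attention strengths v_ij along each input
    output_weights : (N,)   hidden-to-output weights w_i
    """
    # Weighted squared distance to each centre; a zero norm weight
    # removes that input dimension from the distance entirely.
    d2 = np.sum((norm_weights * (x - centres)) ** 2, axis=1)
    # Gaussian activation is maximal (equal to one) when x matches the centre.
    hidden = np.exp(-0.5 * d2)
    return output_weights @ hidden

# Example: two hidden units over three inputs; the second unit ignores input 0.
x = np.array([0.2, 0.5, 0.9])
centres = np.array([[0.2, 0.5, 0.9],
                    [0.0, 0.5, 1.0]])
norm_w = np.array([[1.0, 1.0, 1.0],
                   [0.0, 2.0, 1.0]])
out_w = np.array([0.7, 0.3])
print(rbf_output(x, centres, norm_w, out_w))
```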

[Figure 1: Network model of interest]

Therefore, even though a greater number of hidden layers facilitates the representation of more complex mappings with an MLP network, given the difficulty of computing the best number of layers, little is gained by building an elaborate decision step on this issue into the network design process. Furthermore, since a series of empirical results, some of which are described in [8], have shown that for problems requiring more than one hidden layer there are more adequate knowledge representation forms, it was deemed sufficient to restrict the definition of the geometric interpretation under discussion to the one-hidden-layer MLP shown in Figure 1.
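For concreteness, a one-hidden-layer MLP of the kind shown in Figure 1 can be sketched as follows; the sigmoid hidden units and linear output unit are common assumptions and may not match the paper's exact choices.

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer MLP (the architecture of Figure 1).

    A minimal sketch under assumed activation functions:
    sigmoid hidden layer followed by a linear output layer.
    """
    hidden = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))  # sigmoid hidden layer
    return W2 @ hidden + b2                         # linear output layer

# Example: 3 inputs, 4 hidden units, 1 output.
rng = np.random.default_rng(0)
x = rng.normal(size=3)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
print(mlp_forward(x, W1, b1, W2, b2))
```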

The network using this propagation rule is known as a Conic Section Function Network (CSFN). Two different strategies have been used for training the network, and the contact lens fitting problem is used to demonstrate their performance. On this problem, the proposed algorithm is compared against a standard MLP trained by back propagation, a fast back-propagation variant with adaptive learning rates, and a standard RBFN built with the Matlab Neural Network Toolbox.
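The excerpt does not reproduce the propagation rule itself. The sketch below follows one published formulation of a conic-section unit (after Dorffner), in which an opening angle interpolates between an MLP-style dot product and an RBF-style distance term; the function name and parameterisation are illustrative assumptions, not necessarily the rule used in the paper.

```python
import numpy as np

def conic_section_unit(x, w, c, omega):
    """Net input of a conic-section unit: one published formulation
    (after Dorffner), assumed here for illustration only.

    x     : (P,) input vector
    w     : (P,) weight vector (hyperplane / MLP-like part)
    c     : (P,) centre vector (hypersphere / RBF-like part)
    omega : opening angle; cos(omega) = 0 reduces the rule to a plain
            dot product, while larger |cos(omega)| pulls the decision
            surface towards a sphere around the centre c.
    """
    diff = x - c
    return diff @ w - np.cos(omega) * np.linalg.norm(diff)

# Example: the same unit acting as a hyperplane (omega = pi/2) and with a
# stronger distance term (omega close to 0).
x = np.array([0.4, 0.1])
w = np.array([1.0, -0.5])
c = np.array([0.0, 0.0])
print(conic_section_unit(x, w, c, np.pi / 2))  # pure dot-product behaviour
print(conic_section_unit(x, w, c, 0.2))        # distance term dominates
```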

