iSixSigma

Neural Network

Six Sigma – iSixSigma Forums Old Forums General Neural Network

Viewing 5 posts - 1 through 5 (of 5 total)
  • Author
    Posts
  • #29816

    aush
    Participant

Can anyone guide me on where to find information about neural networks, or explain what a neural network is, for establishing relationships between data?

    0
    #77043

    Sambuddha
    Member

    Aush,
    A neural network (or artificial neural network, ANN) is a set of mathematical tools used for various pattern recognition and forecasting models involving multiple inputs.
    The mathematical model is based on the way human memory/brain operates – mainly by training the neurons (nerve cells) and retaining relationships (positive/negative) between inputs and outputs.
In an ANN, “neurons” are nothing but placeholders for association values, or weights, in a matrix format that “holds” the inputs’ relationship to the outputs. As you train the network, the data conditions it to associate certain changes in the inputs with changes in the outputs.
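As a toy illustration of that idea (a purely illustrative NumPy sketch, not from any particular library): training strengthens the entries of a weight matrix that associate input patterns with output patterns.

```python
import numpy as np

# Toy sketch of "training conditioning the network": a weight matrix that
# holds input-output associations, strengthened as pairs are presented.
# All names here are illustrative.

n_inputs, n_outputs = 3, 2
W = np.zeros((n_inputs, n_outputs))   # "neurons" as a matrix of association weights
rate = 0.1                            # learning rate

# a few (input, output) training pairs
pairs = [
    (np.array([1.0, 0.0, 1.0]), np.array([1.0, 0.0])),
    (np.array([0.0, 1.0, 0.0]), np.array([0.0, 1.0])),
]

for x, y in pairs:
    W += rate * np.outer(x, y)        # co-active input/output strengthens the weight

# after training, an input recalls its associated output pattern
print((np.array([1.0, 0.0, 1.0]) @ W).round(2))  # first output dominates
```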
I recommend a book I read for a grad course in ANN:
http://web.bestwebbuys.com/books/search?t=ISBN&q=0-07-025966-6
Use Matlab’s Neural Network Toolbox for this purpose. Don’t try to program it yourself – that would be reinventing the wheel.
There are lots of different network “architectures” suitable for different applications – backpropagation is one of the most common. Radial Basis Function is one I have used for predicting time series (it works OK within certain limitations… be careful). There are a lot of interesting models.
    If you are interested in using ANN, I suggest that you take a course to do justice to your learning. If you want more information, reply to this post, and I will try to help you. Luckily I have an ANN application in one of my current projects.
    Hope that helps,
    Best,
    Sambuddha

    0
    #77050

    Robert Butler
    Participant

  Neural nets, like MARS, are just one of a number of black-box non-linear regression techniques.  Early papers on the subject tried to compare the functioning of the nets with that of the brain; current papers and books have backed away from such claims. Black-box regression methods are acceptable if you are only interested in mapping inputs to outputs and do not care about cause and effect. Thus neural nets have had demonstrated success in detecting credit card fraud and in determining the liquid level in complex-shaped vessels.
   The biggest problem with the net literature is that an awful lot of it is written and reviewed by people with no real understanding of statistics.  Consequently, you will find paper after paper whose claims and conclusions are based on what can only be characterized as a misunderstanding of the advantages and disadvantages of statistical analysis. If you are looking for a good introduction to the topic, as well as a list of credible authors, I’d recommend the following:
Neural Networks in Applied Statistics – Stern – Technometrics, August 1996, Vol. 38, No. 3
    For what it is worth below is a quick translation of some net terms to those of regression:
               Neural Nets                     Regression Analysis
               Training Set                    Initial Data
               Weights                         Coefficients
               Learning                        Parameter Estimation
               Optimal Brain Surgery           Model Selection/Reduction
               Network                         Statistical Model
               Bias                            Constant
               Nodes                           Sums and Transformations
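The translation above can be made concrete with a small sketch (illustrative NumPy, not any particular package): a one-hidden-layer net is just a nonlinear regression model in which the weights play the role of coefficients, the bias terms are constants, and each node is a sum followed by a transformation.

```python
import numpy as np

# Minimal sketch of a one-hidden-layer net viewed as nonlinear regression:
# "weights" = coefficients, "bias" = constant, "node" = sum + transformation.

rng = np.random.default_rng(0)

def forward(x, W1, b1, W2, b2):
    """Map inputs to outputs through one hidden layer."""
    hidden = np.tanh(x @ W1 + b1)    # nodes: weighted sums + transformation
    return hidden @ W2 + b2          # output: a linear combination, as in regression

x = rng.normal(size=(5, 3))          # 5 observations (the "training set"), 3 inputs
W1 = rng.normal(size=(3, 4))         # input-to-hidden "coefficients"
b1 = np.zeros(4)                     # hidden "constants"
W2 = rng.normal(size=(4, 1))         # hidden-to-output coefficients
b2 = np.zeros(1)

y_hat = forward(x, W1, b1, W2, b2)
print(y_hat.shape)                   # one fitted value per observation
```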

    0
    #77078

    aush
    Participant

    Thanks for the info.

    0
    #77267

    Kumpar
    Participant

I took a graduate-level Neural Networks class a couple of years ago.  A neural network is a loose interpretation of how your brain works, and the mathematical models developed are based on the way your brain processes information.  It is used for predicting the result of a particular event based on past experiences.  Each factor can be left with the same weight, or you can vary the weights; varying the weight of each factor may be important, and usually is, for prediction.  There are different learning rules (listed below) that can be used to predict the results.  Kohonen was the most common, from what I remember.  I found this class to be comparable to linear programming.  There are many software packages out there that use the different techniques.  I think we used NeuralWorks.
There is plenty of information out there on the web.  Just do a search on artificial neural networks.  I typed it into Yahoo and found the description of neural networks below.  I am sure that with a little more searching I could have found something more detailed.
I haven’t used this in any of my industry experience thus far, and I doubt that I will be using it in the near future.  Maybe if I go back to working for a research facility.  They say it can be used in a variety of settings… predicting wait times at airports, air traffic, the stock market and investing.  LOTS OF DATA is needed to perform any worthy analysis, and where I work now we are always struggling with data availability/accuracy.
     
    Learning laws

There are a variety of learning laws in common use. These laws are mathematical algorithms used to update the connection weights. Most of them are some variation of the best-known and oldest learning law, Hebb’s Rule. Our understanding of how neural processing actually works is very limited; learning is certainly more complex than the simplification represented by the learning laws developed so far. Research into different learning functions continues, as new ideas routinely show up in trade publications. A few of the major laws are given as examples below.

Hebb’s Rule: The first and best-known learning rule was introduced by Donald Hebb; the description appeared in his book The Organization of Behavior in 1949. The basic rule is: if a neuron receives an input from another neuron, and both are highly active (mathematically, have the same sign), the weight between the neurons should be strengthened.
Hopfield Law: This law is similar to Hebb’s Rule, with the exception that it specifies the magnitude of the strengthening or weakening. It states: “if the desired output and the input are both active or both inactive, increment the connection weight by the learning rate; otherwise decrement the weight by the learning rate.” (Most learning functions have some provision for a learning rate, or learning constant. Usually this term is positive and between zero and one.)
The Delta Rule: The Delta Rule is a further variation of Hebb’s Rule, and one of the most commonly used. It is based on the idea of continuously modifying the strengths of the input connections to reduce the difference (the delta) between the desired output value and the actual output of a neuron. This rule changes the connection weights in the way that minimizes the mean squared error of the network. The error is back-propagated into previous layers one layer at a time, and the process continues until the first layer is reached. The network type called feedforward back-propagation derives its name from this method of computing the error term. This rule is also referred to as the Widrow-Hoff Learning Rule and the Least Mean Square Learning Rule.
Kohonen’s Learning Law: This procedure, developed by Teuvo Kohonen, was inspired by learning in biological systems. The neurons compete for the opportunity to learn, that is, to update their weights. The processing neuron with the largest output is declared the winner and can inhibit its competitors as well as excite its neighbors. Only the winner is permitted an output, and only the winner and its neighbors are allowed to update their connection weights. The Kohonen rule does not require a desired output; therefore it is implemented in unsupervised methods of learning. Kohonen has used this rule, combined with the on-center/off-surround intra-layer connection, to create the self-organizing neural network, which has an unsupervised learning method. On this Internet site by Sue Becker you may see an interactive demonstration of a Kohonen network, which may give you a better understanding:
    http://www.psychology.mcmaster.ca/4i03/competitive-demo.html
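The Delta rule described above can be sketched in a few lines (an illustrative NumPy example, not production code): a single linear neuron whose weights are nudged after each observation to shrink the delta between the desired and actual output.

```python
import numpy as np

# Sketch of the Delta (Widrow-Hoff / LMS) rule for a single linear neuron:
# repeatedly nudge the weights to reduce the difference (the "delta")
# between the desired output and the neuron's actual output.

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))       # 50 training inputs, 2 factors each
true_w = np.array([2.0, -1.0])
y = X @ true_w                     # desired outputs (noise-free for clarity)

w = np.zeros(2)                    # connection weights, initially zero
rate = 0.05                        # learning rate, between zero and one

for _ in range(200):               # repeated passes over the training set
    for x_i, y_i in zip(X, y):
        delta = y_i - x_i @ w      # desired minus actual output
        w += rate * delta * x_i    # Delta rule weight update

print(np.round(w, 2))              # converges toward the true weights [2., -1.]
```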
    Basically, most applications of neural networks fall into the following five categories:

Prediction: Uses input values to predict some output, e.g. pick the best stocks in the market, predict weather, identify people at risk of cancer.
Classification: Uses input values to determine a classification, e.g. is the input the letter A; is the blob in the video data a plane, and what kind of plane is it.
Data association: Like classification, but it also recognizes data that contains errors, e.g. not only identify the characters that were scanned but also identify when the scanner is not working properly.
Data conceptualization: Analyzes the inputs so that grouping relationships can be inferred, e.g. extract from a database the names of those most likely to buy a particular product.
Data filtering: Smooths an input signal, e.g. take the noise out of a telephone signal.

    0
Viewing 5 posts - 1 through 5 (of 5 total)

The forum ‘General’ is closed to new topics and replies.