Question about Artificial Neural Networks (ANN)

Thread Starter

RRITESH KAKKAR

Joined Jun 29, 2010
2,829
Hi,

I want to know how neural networks work, or rather why we go for a neural network when we already have lots of uCs to do our work.
I have seen there are feed-forward and backward variants with lots of activation functions...
What do they mean in real terms?
 

Wendy

Joined Mar 24, 2008
23,421
A neural network is an alternative computational system. It is nothing like the binary, von Neumann systems we currently use. Neural networks can be simulated, and even built with electronics. Their fundamental unit is not the bit, but the neuron.

We may be able to create intelligence with bits and bytes, but this is not proven. We have a working example of intelligence built on neural nets (us). This alone makes them worth studying.
 

vpoko

Joined Jan 5, 2012
267
We may be able to create intelligence with bits and bytes, but this is not proven. We have a working example of intelligence built on neural nets (us). This alone makes them worth studying.
I have to take issue with this paragraph. First, we don't even have a formal definition of intelligence. Second, the Church-Turing thesis tells us that any computational system (as far as we know, even the human brain) can be effectively simulated by a Turing machine (and hence by any other universal computational system, such as a cellular automaton). This is trivially provable* because a computer could simulate, neuron by neuron, the working of a neural net.

Neural networks may well provide advantages in terms of conciseness (the amount of structure required to solve certain problems vs the amount of code that would be required to solve it on a sequential computer) and efficiency (the amount of time it would take to solve a problem on a neural net versus the amount required on a sequential computer, asymptotically speaking) but no one has ever seriously alleged that neural nets have unique capabilities in terms of what problems they can solve.

*Edit: I didn't like my use of the word "provable" since that implies a mathematical proof while the Church-Turing thesis is a non-mathematical statement (hence a thesis instead of a theorem). Substitute "trivially provable" with "easy to imagine".
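
To make the "neuron by neuron" point concrete, here is a minimal sketch (Python, with made-up weights and layer sizes, purely for illustration) of a sequential program stepping through a tiny feed-forward net one neuron at a time:

Code:
import math

def sigmoid(x):
    # Smooth, nonlinear activation of a single neuron
    return 1.0 / (1.0 + math.exp(-x))

def simulate_layer(inputs, weights, biases):
    # Evaluate one layer neuron by neuron: weighted sum, then activation.
    outputs = []
    for w_row, b in zip(weights, biases):
        total = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(sigmoid(total))
    return outputs

# Arbitrary example: 2 inputs -> 3 hidden neurons -> 1 output neuron
hidden_w = [[0.5, -1.2], [0.8, 0.3], [-0.6, 0.9]]
hidden_b = [0.1, -0.4, 0.2]
out_w = [[1.0, -1.5, 0.7]]
out_b = [0.05]

x = [0.3, 0.7]
h = simulate_layer(x, hidden_w, hidden_b)
y = simulate_layer(h, out_w, out_b)
print(y)  # the net's output, computed entirely by a sequential program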
 
Last edited:

vpoko

Joined Jan 5, 2012
267
Easy enough to prove; I have a working model behind a keyboard, if naturally created.

Show me a digital version.
The problem is you can't prove to anyone else that you're truly "intelligent", rather than just a very clever simulation of intelligence (a very well-written, meat-based chatbot, if you will). I have the same problem. I think I'm (somewhat) intelligent, but others can only observe my external behavior, not my internal processes. For all you know, my neurons may just be running an extremely large program made of if-then statements that responds to every conceivable stimulus (see the "Chinese room" argument). For better or worse, this is a huge, open problem in the philosophy of mind.

Interestingly enough, if we limit our discussion to man-made examples (for all we know, maybe humans have "souls" which can't be replicated by any process available to us; I'm not saying I believe that's the case, only pointing out how hard it is to pin down the difference between us and machines), the most advanced systems we've seen at natural problem solving have all been processor based. IBM's Watson, for example. Neural nets may get more traction but right now they're only useful for very specific problems, not simulating general intelligence (as imprecisely defined as it is).
 
Last edited:

Wendy

Joined Mar 24, 2008
23,421
That argument is sophistry, and doesn't work well in any discussion. I hate philosophy.

There are other computational systems out there, though I couldn't name one at the moment.
 
Last edited:

vpoko

Joined Jan 5, 2012
267
That argument is sophistry, and doesn't work well in any discussion. I hate philosophy.
I'm afraid you opened that can of worms when you used humans as a canonical example of intelligence and then extrapolated that to assume that neural nets were capable of it while sequential processors weren't.

Putting on my computer science hat, the Church-Turing thesis tells us that any universal computational system is capable of effectively simulating any other universal computational system. Further, quantum computers show us that it may be the case that such simulations require exponential time, but that's still an open problem.

I admit that the argument in my last post seems trite at first glance, but there's a very good reason for making it. Of course you know that you're intelligent, and you can reasonably extrapolate that to other human beings, who are made out of the same "stuff" as you - likely even to animals. But, without formally defining intelligence, you now have a bias that intelligence requires human-like properties. Maybe the necessary spark is a neural net. Maybe it's carbon. Maybe it's a soul. But without an actual, workable definition of intelligence, it's impossible to tell what's intelligent and what isn't among computational systems that don't look and act like human beings.

If you can avoid the philosophical questions but can show that neural nets have additional capabilities that can't be simulated by a sequential processor, you should line up for your Turing Award, Gödel Prize, and just about every other award in computer science, because you'd be the first person ever to show a counterexample to the Church-Turing thesis.

To the OP: I'm sorry, I don't know the answer to your question.
 

THE_RB

Joined Feb 11, 2008
5,438
... why we go for a neural network when we already have lots of uCs to do our work?
...
Because weak-minded humans want to try to make superior silicon devices as "fuzzy" and defective as organic minds.

The great future breakthroughs in artificial intelligence will work WITH the high-speed, sequential (faultless) nature of silicon processing, not try to turn it into a fuzzy, linked neuronic mess.

You don't evolve a Ferrari into the highest possible form of land transportation by trying to design legs for it! One day the AI guys will start to realise what is fairly obvious to a child: computers aren't organic, nor should they try to be.
 

Wendy

Joined Mar 24, 2008
23,421
Unfortunately, very few things are yes/no, just many shades of maybe. As far as intelligence goes, my feeling is we lack fundamental theories. If we had a clue it might even be easy.
 

THE_RB

Joined Feb 11, 2008
5,438
... As far as intelligence goes, my feeling is we lack fundamental theories. ...
Agreed! I define intelligence as work/time, so when it comes to calculating square roots my $2 calculator is much more intelligent than me. But I'm more intelligent than it at some other tasks. ;)

It's happening, and will continue happening and accelerate. Probably as much without our meddling as with it.
 

bwack

Joined Nov 15, 2011
113
What is the use of using different functions for activation?
Without an activation function in an ANN, the outputs are linearly dependent on the inputs, so the ANN can only approximate a linear function (rarely useful). With a nonlinear activation function applied inside the network, the ANN is able to approximate a nonlinear function. Some activation functions are better suited than others for a specific problem.
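
As a rough illustration (my own sketch, nothing standard): a neuron without an activation is just a weighted sum, so the network stays linear in its inputs; wrap the sum in something like a sigmoid and the output can bend, which is what lets a network of such neurons fit nonlinear targets.

Code:
import math

def neuron(inputs, weights, bias, activation=None):
    # Weighted sum of inputs plus bias; the activation is optional.
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return s if activation is None else activation(s)

sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))

x = [0.2, -0.5]
# Without an activation the neuron is a plain linear combination...
print(neuron(x, [1.5, -2.0], 0.3))
# ...with a sigmoid it saturates and bends, giving the nonlinearity
# the paragraph above describes.
print(neuron(x, [1.5, -2.0], 0.3, sigmoid))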

So what is a nonlinear function, you may ask. Truth tables for logic circuits are one example, but not very useful for ANNs because we already know the table (!). But imagine a table (of input and output patterns) that is tremendously large and arbitrary, say something silly like 10^14 entries*. You can't find or use an algebraic expression, you know there must exist a generalization of the problem however complex it may seem, and the output does not have to be EXACT - then an ANN can be useful for the application. Oh, and it must also be possible to train this network in a practical way.
* I used 10^14 entries as an example because if you apply an ANN to decision making in the game of backgammon, there are, if I recall correctly, about 10^14 possible patterns in the game, and each pattern (the position in the game) has a calculable probability of winning the game. So when you roll your dice, you know which positions you can reach, you present those positions one by one to your ANN, and then you pick the position that gives the highest probability of winning. If you had to calculate all these values by exhaustive search it would take a very long time, but with the ANN it is almost instant. Read more about it on the net; it is an interesting use of ANNs. There are other uses too, like object identification (classifying letters and numbers), AC motor control, and others.
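
A very rough sketch of that decision step (the one-neuron "evaluator" and its weights are invented here, standing in for a properly trained network such as the one used in backgammon programs):

Code:
import math

# Toy stand-in for a trained evaluation network: one sigmoid neuron
# over a small feature vector.  Real evaluators are far larger; the
# weights here are made up purely for illustration.
WEIGHTS = [0.8, -0.3, 1.1]
BIAS = -0.2

def evaluate_position(features):
    s = sum(w * f for w, f in zip(WEIGHTS, features)) + BIAS
    return 1.0 / (1.0 + math.exp(-s))   # estimated win probability

def choose_move(candidate_positions):
    # Score every position reachable with this dice roll and play
    # the one the network rates highest -- no exhaustive search.
    return max(candidate_positions, key=evaluate_position)

candidates = [[0.1, 0.9, 0.0], [0.6, 0.2, 0.3], [0.4, 0.4, 0.8]]
print(choose_move(candidates))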

I think the beauty of an ANN is that a well-trained network can give a useful output for an input pattern even if it never saw that pattern during training. We can then say that the ANN has generalized the problem. Think of the ANN as a polynomial function set up to mimic another function: if the polynomial fits the mimicked function well, you can put in any value (within limits) and get a useful output (!).
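
The polynomial analogy can be made concrete in a few lines (this assumes numpy, and the sample data is invented for the example): fit a curve to a handful of samples, then ask for an output at an input the fit never saw.

Code:
import numpy as np

# A handful of "training" samples from some underlying function
x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_train = np.array([1.0, 2.7, 7.4, 20.1, 54.6])   # roughly e**x

# Fit a cubic polynomial -- the stand-in for a trained network
coeffs = np.polyfit(x_train, y_train, deg=3)
model = np.poly1d(coeffs)

# Ask about an input the fit never saw; within the training range
# the answer is useful even though it is not exact.
print(model(2.5))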
 
Last edited:

Wendy

Joined Mar 24, 2008
23,421
Hi,

Why do we make logic gates in an ANN, like NOT, AND and OR, when these can be done easily in a chip?
I don't know how much study you have done, but the math came first (Boolean algebra). A gentleman named Babbage realized that such calculation could be mechanized with practical consequences in the real world, and tried to build the first real computer (his Difference Engine, and later the programmable Analytical Engine) using gears and mechanics.

This was long before practical electronics. Unfortunately, machining was not up to the required tolerances either. Experts say the plans would have worked, though, and his 3 incomplete prototypes are scattered throughout Britain (I think). There has been some talk of finishing one (or building one from scratch).

Babbage's descriptions led to the early computers built with tubes. ENIAC had over 17,000 tubes (valves if you're British). When BJT transistors came along, smaller, more practical computers followed, still without ICs, in the 1950s and 60s. Computers were still large, though; the war and the Cold War helped drive their development.

As I have said, there are other systems that can calculate, neural nets being one. We have gone far with this system, though. Theory is critical; both Babbage and Boole contributed a lot of the theory behind digital computing.

As I said, no real theory for intelligence exists as far as I know. Lots of work is being done on it though.
 
Last edited:

kubeek

Joined Sep 20, 2005
5,795
...any computational system ... can be effectively simulated by a Turing machine...
I have to disagree. First, I don't really think you can call the brain a computational system, but secondly, even if you call it that, no Turing machine can ever simulate an analog signal or function, because a Turing machine is a discrete engine and can never calculate with continuous numbers. A Turing machine could approximate a neuron, but never truly emulate it; it simply lives in a completely different world.
 

vpoko

Joined Jan 5, 2012
267
I have to disagree. First, I don't really think you can call the brain a computational system, but secondly, even if you call it that, no Turing machine can ever simulate an analog signal or function, because a Turing machine is a discrete engine and can never calculate with continuous numbers. A Turing machine could approximate a neuron, but never truly emulate it; it simply lives in a completely different world.
There is no such thing as a purely analog value in the physical world. All of your analog values are, at the very least, limited by the precision of your measurements (and once you get down to very small scales and energies, the Planck constant). Once you allow for that, you can simulate them using digital logic.
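
In other words, once a measurement has finite precision it fits in a finite number of bits. A toy sketch (the resolution figure is just an assumption for the example):

Code:
# Quantise a "continuous" reading to the resolution of the instrument.
# With, say, 1 mV of measurement precision, 16 bits over a 0-5 V range
# already distinguishes every value the instrument can tell apart.
RESOLUTION = 0.001          # volts per step (assumed measurement limit)

def to_digital(analog_value):
    return round(analog_value / RESOLUTION)   # an ordinary integer

def to_analog(code):
    return code * RESOLUTION

reading = 3.14159           # whatever the sensor reports
code = to_digital(reading)
# The reading survives the round trip to within the assumed precision,
# using nothing but digital logic.
print(code, to_analog(code))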

It's an open problem, but I believe that all directly-observable values are discrete once you get to a fine-grained level. The only truly continuous values are not directly observable, like the amplitudes of the quantum wave function. Truly analog, directly-observable values would allow you to store an infinite amount of information in a finite amount of space. I can't prove that it's impossible, but I would find that very hard to accept.
 
Last edited: