Thursday 14 May 2020

Neural Network Topologies



Network Topology

A network topology is the arrangement of a network along with its nodes and connecting lines. According to the topology, ANNs can be classified into the following kinds −

Feed forward Network


It is a non-recurrent network having processing units/nodes arranged in layers, and all the nodes in a layer are connected with the nodes of the previous layer. The connections carry different weights. There is no feedback loop, which means the signal can only flow in one direction, from input to output. It may be divided into the following two types −


Single layer feed forward network − The concept is of a feed forward ANN having only one weighted layer. In other words, the input layer is fully connected to the output layer.
Multi layer feed forward network − The concept is of a feed forward ANN having more than one weighted layer. As this network has one or more layers between the input and the output layer, these are called hidden layers (a small code sketch of both types follows).
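
To make the two types concrete, here is a minimal sketch in Python/NumPy of a single layer and a multi layer feed forward pass. The layer sizes, random weights, and the tanh output function are illustrative assumptions, not something fixed by the notes.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=3)                  # activations of 3 input units (arbitrary size)

    # Single layer feed forward network: one weighted layer,
    # i.e. the input layer is fully connected to the output layer.
    W_out = rng.normal(size=(2, 3))         # 2 output units, 3 inputs
    y_single = np.tanh(W_out @ x)           # signal flows only from input to output

    # Multi layer feed forward network: a hidden layer sits between
    # input and output, giving more than one weighted layer.
    W_hid = rng.normal(size=(4, 3))         # hidden layer with 4 units
    W_top = rng.normal(size=(2, 4))
    y_multi = np.tanh(W_top @ np.tanh(W_hid @ x))   # still no feedback loop

    print(y_single, y_multi)

In both cases the computation is a single left-to-right pass; nothing computed later is ever fed back to an earlier layer.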


Feedback Network

As the name suggests, a feedback network has feedback paths, which means the signal can flow in both directions using loops. This makes it a non-linear dynamic system, which changes continuously until it reaches a state of equilibrium. It may be divided into the following types −

  • Recurrent networks − They are feedback networks with closed loops. The following are the two types of recurrent networks.
  • Fully recurrent network − It is the simplest neural network architecture because all nodes are connected to all other nodes and each node works as both input and output.
  • Jordan network − It is a closed-loop network in which the output goes back to the input again as feedback (a small code sketch of this step is given below).
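
As a minimal sketch of the feedback idea, the following Python/NumPy loop runs a Jordan-style step in which the previous output is concatenated with the current input. The sizes, random weights, and tanh nonlinearity are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_hid, n_out = 3, 4, 2                      # arbitrary layer sizes
    W_h = rng.normal(size=(n_hid, n_in + n_out))      # hidden sees input + fed-back output
    W_o = rng.normal(size=(n_out, n_hid))

    y_prev = np.zeros(n_out)                          # feedback (context) starts at zero
    for t in range(5):                                # a short input sequence
        x_t = rng.normal(size=n_in)
        h = np.tanh(W_h @ np.concatenate([x_t, y_prev]))  # closed loop: output -> input
        y_prev = np.tanh(W_o @ h)                     # fed back at the next step
    print(y_prev)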
The above material is collected from https://www.tutorialspoint.com/
Below are my class notes for the same topic:

Neural Network Topologies: 

Artificial Neural Networks are only useful when the processing units are organized in a suitable manner to accomplish a given pattern recognition task.

The arrangement of the processing units, their connections, and the input/output patterns is referred to as the topology.

Artificial Neural Networks are normally organized into layers of processing units. The units of a layer are similar in the sense that they all have similar activation dynamics and output functions.

Connections can be made either from units of one layer to units of another layer only (inter-layer connections), or with both inter-layer and intra-layer connections (connections among units within the same layer).

Further, the connections across the layers and among the units within a layer can be organized either in a feedforward manner or in a feedback manner. In a feedback network, the same processing unit may be visited more than once.
Let us consider two layers F1 and F2 with M and N processing units respectively.
By providing connections to the jth unit of the F2 layer from all the units of the F1 layer we get the in-star structure, which has a fan-in geometry; conversely, by providing connections from the jth unit of F2 to all the units of F1 we get the out-star structure, which has a fan-out geometry (see the sketch below).
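
A small sketch, assuming Python/NumPy with arbitrary sizes and random weights, of how the fan-in (in-star) and fan-out (out-star) computations look between F1 (M units) and F2 (N units):

    import numpy as np

    rng = np.random.default_rng(2)
    M, N = 5, 3                        # units in F1 and F2 (arbitrary)
    j = 1                              # index of the jth unit in F2
    f1 = rng.normal(size=M)            # activations of the F1 layer

    # In-star (fan-in): the jth unit of F2 receives connections from all M units of F1.
    w_in = rng.normal(size=M)          # weights into unit j
    f2_j = np.dot(w_in, f1)            # weighted sum collected at unit j

    # Out-star (fan-out): the jth unit of F2 sends connections to all M units of F1.
    w_out = rng.normal(size=M)         # weights out of unit j
    f1_from_j = w_out * f2_j           # unit j's activation distributed to all F1 units

    print(f2_j, f1_from_j)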

 

If all the connections between the units in F1 and F2 are made, we obtain a heteroassociation network. This can be viewed as a group of in-stars if the flow is from F1 to F2, and as a group of out-stars if the flow is from F2 to F1.
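
A minimal sketch, again assuming Python/NumPy with random illustrative weights, of the fully connected heteroassociative structure: the N x M weight matrix acts as a group of in-stars for the F1 to F2 flow, and the same connections act as out-stars for the F2 to F1 flow.

    import numpy as np

    rng = np.random.default_rng(3)
    M, N = 5, 3
    W = rng.normal(size=(N, M))        # all connections between F1 (M units) and F2 (N units)

    f1 = rng.normal(size=M)
    f2 = W @ f1                        # flow F1 -> F2: row j of W is the in-star of F2 unit j

    f1_back = W.T @ f2                 # flow F2 -> F1: the same connections act as out-stars
    print(f2, f1_back)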



When the flow is bidirectional, we get a bidirectional associative memory, where either of the layers can be used as input or output.
If the two layers F1 and F2 coincide and the weights are symmetric, i.e. Wji = Wij for i ≠ j, then we obtain an autoassociative memory in which each unit is connected to every other unit and to itself (a small sketch follows).
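
As a minimal sketch of the autoassociative case, the following Python/NumPy snippet builds a symmetric weight matrix (Wij = Wji) for one layer of bipolar units and recalls a stored pattern from a corrupted copy. The outer-product (Hebbian) storage rule and the single synchronous update step are common illustrative choices, not something prescribed by the notes.

    import numpy as np

    pattern = np.array([1, -1, 1, 1, -1, -1])     # one bipolar pattern, 6 units (arbitrary)

    # Symmetric weights: W[i, j] == W[j, i]; every unit connects to every unit.
    W = np.outer(pattern, pattern).astype(float)
    assert np.allclose(W, W.T)                    # symmetry check

    noisy = pattern.copy()
    noisy[0] = -noisy[0]                          # corrupt one unit
    recalled = np.sign(W @ noisy)                 # one synchronous update step
    print(np.array_equal(recalled, pattern))      # True: the stored pattern is recovered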

