
HAP-Net

Hierarchical Auto-associative Polynomial Network for Deep Learning of Complex Manifolds

Neural networks are able to model the functionality of the brain
  • Synaptic junctions are modeled as weights in a nodal system
  • Ability to associate different inputs with outputs
  • The basic architecture considered here is the nonlinear line attractor (NLA) network (an associative-recall sketch follows this list)
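To make the association idea concrete, here is a minimal sketch of associative recall using a classic Hopfield-style point attractor (Hebbian weights, sign activation). This is not the NLA formulation itself, which models the attractor as a line rather than a point; all sizes and values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(2, 64))   # two stored binary patterns

# Hebbian weights: each stored pattern reinforces its own pairwise correlations.
W = patterns.T @ patterns / patterns.shape[1]
np.fill_diagonal(W, 0.0)

def recall(x, steps=5):
    # Repeated nonlinear updates pull a noisy input toward a stored pattern.
    for _ in range(steps):
        x = np.sign(W @ x)
    return x

noisy = patterns[0].copy()
noisy[:6] *= -1                                    # corrupt six of the 64 entries
print(np.array_equal(recall(noisy), patterns[0]))  # True: the pattern is restored
```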
Separability concepts of neuron structures
  • Neurons form lobes to provide different functions
  • Modularity can improve existing neural network architectures
Neural networks can be used to learn complex manifolds
  • The most common architecture is the feed-forward neural network
  • Node values propagate from the input layer toward the output
Feed-forward neural networks are a series of transformations
  • Cascading several nonlinear transformations represents the data more fully than a small number of transformations
  • Most statistical transformations are built from nonlinear transformations
  • Deep neural networks cascade several transformations together to model a complex dataset (a minimal cascade sketch follows this list)
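A minimal sketch of this cascade; the layer sizes, random weights, and ReLU activation are chosen purely for illustration.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def forward(x, layers):
    """Propagate an input through each (W, b) nonlinear transformation in turn."""
    for W, b in layers:
        x = relu(W @ x + b)
    return x

rng = np.random.default_rng(1)
sizes = [16, 32, 32, 8]               # more layers = more cascaded transformations
layers = [(rng.normal(size=(m, n)), np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]
print(forward(rng.normal(size=16), layers).shape)   # (8,)
```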
Convolutional neural networks (CNN)
  • Use local overlapping regions to correspond to visual fields
  • A learned filter is convolved over each region to form a convolutional layer
  • The result is processed with a rectified linear unit (activation)
  • A pooling layer computes the max or average value over each region
  • Several such layers can then feed a classifier, such as an MLP network (see the toy pipeline below)
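A toy, loop-based sketch of that conv -> ReLU -> pool pipeline on a single-channel image; the hand-made edge filter and the sizes are arbitrary assumptions, not a full CNN.

```python
import numpy as np

def conv2d(img, kernel):
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output value looks at one local (overlapping) region.
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.random.default_rng(2).normal(size=(8, 8))
edge = np.array([[1.0, -1.0], [1.0, -1.0]])           # a hand-made edge filter
feat = max_pool(np.maximum(conv2d(img, edge), 0.0))   # conv -> ReLU -> pool
print(feat.shape)   # (3, 3); flattened, this could feed an MLP classifier
```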
Both deep learning networks and convolutional networks contain nonlinear mappings
  • The nonlinearity comes from passing the weighted sum of inputs through a nonlinear activation function
  • The weights themselves are inherently linear (see the numerical check below)
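A quick numerical check of this point: without an activation between them, two weight layers collapse into a single linear map. The matrix shapes here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
W1, W2 = rng.normal(size=(5, 4)), rng.normal(size=(3, 5))
x = rng.normal(size=4)

stacked = W2 @ (W1 @ x)                       # two "layers" with no activation
collapsed = (W2 @ W1) @ x                     # one equivalent linear layer
print(np.allclose(stacked, collapsed))        # True: still a linear map

nonlinear = W2 @ np.maximum(W1 @ x, 0.0)      # ReLU inserted between layers
print(np.allclose(nonlinear, collapsed))      # False (in general): nonlinear
```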
HAP-Net architecture
  • Construct a neural network with a polynomial weighting system
  • Incorporate multiple layers and modularity for more complex learning
  • Hierarchical auto-associative polynomial network (HAP Net) architecture to encompass deep learning, modularity, and polynomial weighting concepts
Polynomial weighting systems will provide even deeper learning capabilities
  • Polynomial neural network (PNN)
  • Inputs are multiplied together to create polynomial terms
  • A weight set is fit to the relationship between inputs and expected outputs (see the sketch below)
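A minimal polynomial-weighting sketch, assuming a degree-2 expansion and a least-squares fit; PNNs use various training schemes, so this is only illustrative.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_expand(X):
    """Bias, raw inputs, and all pairwise products x_i * x_j (degree 2)."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    for i, j in combinations_with_replacement(range(X.shape[1]), 2):
        cols.append(X[:, i] * X[:, j])
    return np.stack(cols, axis=1)

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 3))
y = X[:, 0] * X[:, 1] - 2.0 * X[:, 2] ** 2   # a target no linear weight set fits

P = poly_expand(X)
w = np.linalg.lstsq(P, y, rcond=None)[0]     # one weight per polynomial term
print(np.allclose(P @ w, y, atol=1e-8))      # True: relationship captured exactly
```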
To achieve a more complex representation, we combine these different neural network concepts into a new architecture: the Hierarchical Auto-associative Polynomial Network (HAP Net)
  • Deep learning concepts through multiple layers
  • Overlapping regions and modularity from convolutional neural networks
  • Nonlinear weighting systems from polynomial neural networks (an illustrative sketch follows this list)
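The sketch below is an illustrative combination of the three ingredients, not the published HAP Net formulation: overlapping regions give modularity, per-region polynomial terms give the nonlinear weighting, and stacking the construction gives depth. Region size, stride, module width, and the tanh activation are all assumptions.

```python
import numpy as np

def regions(x, size=4, stride=2):
    """Overlapping local regions of a 1-D input, like a visual field."""
    return [x[i:i + size] for i in range(0, len(x) - size + 1, stride)]

def poly_terms(r):
    """Raw region values plus all pairwise products (a degree-2 expansion)."""
    terms = list(r)
    for i in range(len(r)):
        for j in range(i, len(r)):
            terms.append(r[i] * r[j])
    return np.array(terms)

def hap_layer(x, rng):
    # One hierarchical layer: each region gets its own polynomial weight set.
    outs = []
    for r in regions(x):
        t = poly_terms(r)
        W = rng.normal(size=(2, t.size)) * 0.1    # 2 outputs per module
        outs.append(np.tanh(W @ t))
    return np.concatenate(outs)

rng = np.random.default_rng(5)
x = rng.normal(size=16)
h = hap_layer(x, rng)          # layer 1: 7 regions x 2 outputs = 14 values
y = hap_layer(h, rng)          # layer 2 cascades the same construction
print(x.shape, h.shape, y.shape)
```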

[Figures: HAP Net architecture diagrams (HapNet1, HapNet2, HapNet3)]

Publications

Theus H. Aspiras and Vijayan K. Asari, "Hierarchical autoassociative polynomial network (HAP Net) for pattern recognition," Neurocomputing, vol. 222, pp. 1-10, January 2017. doi:10.1016/j.neucom.2016.10.002.
CONTACT

Vision Lab, Dr. Vijayan Asari, Director

Kettering Laboratories
300 College Park
Dayton, Ohio 45469-0232
937-229-1779