Machine Learning

Artificial Intelligence and Machine Learning algorithms are involved in a vast variety of scientific and industrial projects, contributing to the solution of emerging problems ranging from data modelling and analysis to face recognition and self-driving cars. Regardless of the practical application, the core task of machine learning is the construction and training of an Artificial Neural Network (ANN), which is inspired by the biological neurons of the human nervous system that the ANN attempts to simulate. An artificial neural network is thus an abstract algorithmic construction within the field of computational intelligence, aiming to simulate the operation of biological neural networks through rigorous mathematical models built from layers of nodes, the building blocks of the network.

There are three types of nodes: input, output and computational (hidden) nodes. Input nodes perform no calculations; they simply mediate between the network's environmental inputs and the computational nodes. Output nodes convey the network's final numerical outputs to the environment. Computational nodes multiply each input by the corresponding synaptic weight and compute the total sum of the products.

In recent years there has been an explosion of interest in neural networks, as they have been applied successfully in an unusually wide range of fields of science and technology. Neural networks are introduced in almost any situation where a relationship between predictor variables (independent inputs) and predicted values (dependent outputs) is investigated, even when this relationship is highly complex: prediction, regression, data modelling, input importance, function approximation, partial differential equations, and so on. Their success stems from the underlying theory that they can accurately approximate any continuous function on compact subsets.
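The computational node described above can be sketched in a few lines of code. This is a minimal illustration, not any particular library's implementation; the sigmoid activation and the example numbers are assumptions added for the sketch.

```python
import math

def neuron_output(inputs, weights, bias):
    """One computational (hidden) node: multiply each input by its
    synaptic weight, sum the products plus a bias, then pass the
    total through a sigmoid activation (an illustrative choice)."""
    total = bias + sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-total))

# Example: a node with two inputs and made-up weights
print(neuron_output([0.5, -1.2], [0.8, 0.3], 0.1))
```

With zero weights and zero bias the sigmoid returns exactly 0.5, which is a convenient sanity check for the weighted-sum logic.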


Supervised Learning

  • Introduction to function approximation
  • Interpolation methods
  • Nonlinear regression
  • Logistic regression
  • Neural Networks as universal approximators
  • Training Algorithms
  • Train & Test set
  • Overfitting & Overtraining
  • Ensemble models (Cross validated, Random Forests etc.)
  • Forward and Backward Stepwise models
  • Radial Basis kernel approximators
  • Classification algorithms
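Two of the topics above, logistic regression and the train/test split, can be combined in one short sketch. The gradient-descent trainer and the toy labelling rule below are illustrative assumptions, not material from the outline itself.

```python
import math
import random

def train_logistic(X, y, lr=0.5, epochs=500):
    """Fit a logistic regression by plain gradient descent
    (a minimal sketch, not a production optimizer)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))          # predicted probability
            err = p - yi                            # gradient of the log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def accuracy(w, b, X, y):
    """Fraction of examples whose predicted class matches the label."""
    correct = 0
    for xi, yi in zip(X, y):
        z = b + sum(wj * xj for wj, xj in zip(w, xi))
        correct += (z > 0) == (yi == 1)
    return correct / len(X)

# Toy data: label 1 when x0 + x1 > 1 (a hypothetical rule)
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if x[0] + x[1] > 1 else 0 for x in X]

# Train & test split: fit on the first 150 rows, evaluate on the rest
X_train, y_train = X[:150], y[:150]
X_test, y_test = X[150:], y[150:]

w, b = train_logistic(X_train, y_train)
print("test accuracy:", accuracy(w, b, X_test, y_test))
```

Evaluating on held-out data, as above, is what exposes overfitting: a model that memorizes the training rows scores well on the training set but poorly on the test set.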

Unsupervised Learning

  • Multidimensional scaling
  • Hierarchical Trees
  • k-Means clustering
  • Clustering
  • Visualization and mapping
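Of the unsupervised topics above, k-Means clustering is compact enough to sketch in full. The version below is Lloyd's algorithm in plain Python; the two synthetic "blobs" and the random initialization are assumptions made for the example.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's algorithm: alternately assign each point to its
    nearest centroid, then move each centroid to its cluster mean."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
            )
            clusters[nearest].append(p)
        for i, c in enumerate(clusters):
            if c:  # keep the old centroid if a cluster emptied out
                centroids[i] = tuple(sum(col) / len(c) for col in zip(*c))
    return centroids, clusters

# Two well-separated synthetic blobs around (0, 0) and (5, 5)
rng = random.Random(1)
pts = [(rng.gauss(0, 0.3), rng.gauss(0, 0.3)) for _ in range(30)] + \
      [(rng.gauss(5, 0.3), rng.gauss(5, 0.3)) for _ in range(30)]
cents, clus = kmeans(pts, 2)
print(sorted(c[0] for c in cents))
```

On data this cleanly separated, the two centroids settle near the blob centres; on harder data, k-means is sensitive to initialization and is usually restarted several times.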

Quantification of feature importance

  • Connection weights
  • Partial derivatives
  • Sensitivity analysis
  • Input perturbation
  • Relieff feature selection
  • Forward stepwise addition
  • Backward stepwise elimination
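One of the techniques listed above, input perturbation, is simple enough to sketch directly: add noise to a single input feature and measure how much the model's error grows. The toy model below (which depends only on its first input) is a hypothetical stand-in for a trained network.

```python
import random

def perturbation_importance(model, X, y, feature, noise=1.0, seed=0):
    """Input perturbation: corrupt one feature with Gaussian noise and
    return the resulting increase in mean squared error. A larger
    increase indicates a more important feature."""
    rng = random.Random(seed)

    def mse(rows):
        return sum((model(x) - t) ** 2 for x, t in zip(rows, y)) / len(rows)

    base = mse(X)
    X_pert = [list(x) for x in X]
    for row in X_pert:
        row[feature] += rng.gauss(0, noise)
    return mse(X_pert) - base

# Hypothetical "trained model" that uses x0 and ignores x1
model = lambda x: 3 * x[0]
random.seed(0)
X = [[random.random(), random.random()] for _ in range(100)]
y = [model(x) for x in X]

print(perturbation_importance(model, X, y, 0))  # large: x0 drives the output
print(perturbation_importance(model, X, y, 1))  # 0.0: x1 is unused
```

The same probe applies to any black-box model; ranking features by this error increase gives a model-agnostic counterpart to the connection-weights and partial-derivatives methods listed above.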

