TECHNICAL PHD SEMINAR SERIES

MACHINE LEARNING AND DATA STREAMS

A new and orthogonal view of disciplinary research

Machine learning, together with data streams, offers a new and universal way of looking at world phenomena that is radically different from classical disciplinary and theory-based approaches. In contrast to theory-driven approaches, machine learning, a relatively young field of research, inverts the process of scientific modeling: given a well-posed question and a large number of observations around it, machine learning promises to learn good answers directly from the data. As a result, trained models have a greater capacity to deal with unique and complex problems for which no a priori descriptive theory exists. With this conceptual turn, the last few years have seen incredible progress in very complex application domains such as natural language modeling, which has led to automatic translation between many spoken languages without any need to directly encode their semantic or syntactic rules. There are also several success stories in computer vision and especially in the new field of deep learning, which have enabled projects such as self-driving cars.
However, while new machine learning algorithms such as deep learning have created a new wave of applications in computer vision and natural language translation, many complex applications are yet to be investigated.
Just as engineering students need to learn calculus and differential equations, and just as architects learn to draw with pencil and paper, students and researchers now need to become literate in these new fields. In this technical seminar series, our objectives are twofold: we discuss the mathematical and computational techniques behind data-driven modeling, and we present a wide variety of applications from different domains. Graduate researchers are expected to learn to approach their own problems with this new style of thinking.

Dates: Tuesdays 14:00-16:00

Introduction: Tuesday, October 3, 2017

Place: Chair for CAAD, D-ARCH/ITA/CAAD HIB E16

Course tutor: Vahid Moosavi

GitHub Code Repository

COURSE MATERIALS

Chapter 1: Determinism and scientific modeling

  • History of determinism
  • Newtonian mechanics and rapid growth of science
  • The belief in objective truth
  • Determinism and the belief in equations
  • Laplace’s demon
  • Limits to determinism
  • Non-predictable determinism and chaotic systems
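
To see what non-predictable determinism means in practice, here is a minimal Python sketch of the logistic map (a standard toy example; the perturbation of 1e-9 is an arbitrary choice): two almost identical initial conditions quickly diverge even though the rule is fully deterministic.

```python
# Minimal sketch: the logistic map x_{t+1} = r * x_t * (1 - x_t) is fully
# deterministic, yet for r = 4 two almost identical initial conditions
# quickly end up on completely different trajectories.

def logistic_trajectory(x0, r=4.0, steps=30):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # perturb the initial condition slightly

for t, (xa, xb) in enumerate(zip(a, b)):
    print(t, round(xa, 6), round(xb, 6), round(abs(xa - xb), 6))
```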

Lecture notes

Chapter 2: Probability and statistics (Introduction)

  • Uncertainty and randomness
  • Principles of probability theory
  • Random variable and probability distributions
  • Functions of random variables
  • Law of large numbers
  • Central limit theorem
  • Statistical measures
  • Expected value (average)
  • Variance
  • Median
  • Mode
  • Moments and higher order statistics
  • Skewness, tails and kurtosis
  • Covariance
  • Scatter matrix
  • Correlation
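
A minimal NumPy/SciPy sketch of these statistical measures, computed on a hypothetical skewed sample:

```python
import numpy as np
from scipy import stats

# Hypothetical sample: 10,000 draws from a skewed (log-normal) distribution.
rng = np.random.RandomState(0)
x = rng.lognormal(mean=0.0, sigma=0.5, size=10000)

print("mean      :", np.mean(x))        # expected value (average)
print("variance  :", np.var(x))
print("median    :", np.median(x))
print("skewness  :", stats.skew(x))     # third-moment based asymmetry
print("kurtosis  :", stats.kurtosis(x)) # heaviness of the tails

# Law of large numbers / central limit theorem, informally: means of many
# independent samples concentrate around the true mean and look Gaussian.
sample_means = rng.lognormal(0.0, 0.5, size=(1000, 100)).mean(axis=1)
print("std of sample means:", sample_means.std())
```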

Lecture notes

Chapter 3: Kernel density estimation and least squares method

  • Density learning rather than known distributions
  • Meta-parameters in machine learning
  • Gaussian kernel density estimation
  • Regression problem
  • Least squares method
  • Linear regression
  • Polynomial regression
  • Local polynomial regression
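
A minimal sketch of Gaussian kernel density estimation and least-squares fitting on synthetic data (the noise level and polynomial degrees are arbitrary choices):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.RandomState(1)

# Gaussian kernel density estimation on a 1D sample (the bandwidth is the
# meta-parameter; SciPy chooses one automatically unless overridden).
sample = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(1, 1.0, 700)])
kde = gaussian_kde(sample)
print("estimated density at x=0:", kde(0.0)[0])

# Least squares: fit a line and a cubic polynomial to noisy data.
x = np.linspace(0, 4, 50)
y = np.sin(x) + rng.normal(0, 0.1, x.size)
linear_coeffs = np.polyfit(x, y, deg=1)   # linear regression
cubic_coeffs = np.polyfit(x, y, deg=3)    # polynomial regression
print("linear fit:", linear_coeffs)
print("cubic fit :", cubic_coeffs)
```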

Lecture notes

Chapter 4: Linear transformations: Principal Component Analysis (PCA)

  • Linear algebra and matrix operations (introduction)
  • Linear space transformations
  • Principal Component Analysis (PCA)
  • Extensions to PCA
  • Sparse Coding (SC)
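
A minimal sketch of PCA computed directly from the eigen-decomposition of the covariance matrix, on synthetic 3D data that mostly varies along one direction:

```python
import numpy as np

# PCA from scratch via the eigen-decomposition of the covariance matrix.
rng = np.random.RandomState(2)
t = rng.normal(size=(500, 1))
X = np.hstack([t, 0.5 * t, 0.1 * t]) + rng.normal(scale=0.05, size=(500, 3))

Xc = X - X.mean(axis=0)                     # center the data
cov = np.cov(Xc, rowvar=False)              # 3x3 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order]              # principal directions
projected = Xc @ components[:, :2]          # linear transformation to 2D

print("explained variance ratios:", eigvals[order] / eigvals.sum())
print("projected shape:", projected.shape)
```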

Lecture notes

Chapter 5: Data clustering

  • K-Means
  • Limits and Extensions
  • Probabilistic clustering: Gaussian mixture models
  • Density based clustering
  • DBSCAN
  • Fundamental limits to the classical notion of clusters and community detection
  • Clustering as feature learning in comparison to PCA and sparse coding
  • Clustering as space indexing
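
A minimal scikit-learn sketch contrasting K-Means and DBSCAN on synthetic blob data (the meta-parameters eps and min_samples are arbitrary choices):

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN

# Hypothetical 2D data with three blobs.
rng = np.random.RandomState(3)
X = np.vstack([rng.normal(loc, 0.3, size=(100, 2)) for loc in ([0, 0], [3, 0], [0, 3])])

# K-Means: the number of clusters is a meta-parameter chosen in advance.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("K-Means cluster sizes:", np.bincount(kmeans.labels_))

# DBSCAN: density-based, finds the number of clusters itself and marks
# low-density points as noise (label -1).
dbscan = DBSCAN(eps=0.4, min_samples=5).fit(X)
print("DBSCAN labels found:", sorted(set(dbscan.labels_)))
```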

Lecture notes

Chapter 6: Self Organizing Maps (SOM)

  • Vector quantization
  • Manifold learning
  • Topology preserving dimensionality reduction
  • Dimensionality reduction and nonlinear space transformation
  • Visualization of high dimensional spaces
  • Clustering and classification with SOM
  • Function approximation
  • Transfer function: Multidirectional function approximation
  • Non-parametric probability density estimation
  • Filling missing values
  • Resampling
  • Data reduction
  • Contextual Numbers
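
A deliberately minimal, from-scratch sketch of the SOM training loop on synthetic 2D data (the grid size, learning-rate and neighborhood schedules are arbitrary choices; for real work one would use a dedicated library such as SOMPY):

```python
import numpy as np

# A 10x10 grid of codebook vectors trained on 2D data: vector quantization
# plus a topology-preserving neighborhood on the grid.
rng = np.random.RandomState(4)
data = rng.uniform(size=(2000, 2))          # hypothetical 2D inputs

grid_w, grid_h, dim = 10, 10, 2
codebook = rng.uniform(size=(grid_w * grid_h, dim))
# (x, y) grid coordinates of every map node, used for the neighborhood.
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], dtype=float)

n_iter = 5000
for t in range(n_iter):
    x = data[rng.randint(len(data))]
    bmu = np.argmin(((codebook - x) ** 2).sum(axis=1))    # best matching unit
    sigma = 3.0 * (1.0 - t / n_iter) + 0.5                # shrinking neighborhood
    lr = 0.5 * (1.0 - t / n_iter) + 0.01                  # decaying learning rate
    grid_dist = ((coords - coords[bmu]) ** 2).sum(axis=1)
    h = np.exp(-grid_dist / (2 * sigma ** 2))             # Gaussian neighborhood
    codebook += lr * h[:, None] * (x - codebook)          # pull nodes toward x

print("codebook shape:", codebook.shape)   # 100 topology-preserving prototypes
```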

Lecture notes

Chapter 7: Markov chains and dynamical systems

  • History of Markov chains
  • Random walks
  • Transition matrix and linear operators
  • Typology of Markov chains
  • Steady state probabilities and PageRank
  • State space explosion
  • Relations of Markov chains and Markov Decision Process (MDP)
  • Mixing times
  • Recurrence time
  • Mean first passage time
  • Kemeny Constant
  • Sensitivity analysis of the transition matrix
  • Applications to economic networks, traffic and text
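
A minimal NumPy sketch of a transition matrix, its steady state probabilities via power iteration, and the same idea behind PageRank on a toy 3-node graph:

```python
import numpy as np

# A small 3-state Markov chain given by its row-stochastic transition matrix.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.1, 0.3, 0.6]])

# Steady state probabilities by power iteration: pi = pi @ P at convergence.
pi = np.ones(3) / 3.0
for _ in range(1000):
    pi = pi @ P
print("stationary distribution:", pi)

# The same idea underlies PageRank: a random walk on a link graph with a
# damping factor, whose stationary distribution ranks the nodes.
adjacency = np.array([[0, 1, 1],
                      [1, 0, 1],
                      [0, 1, 0]], dtype=float)
walk = adjacency / adjacency.sum(axis=1, keepdims=True)
d = 0.85
google = d * walk + (1 - d) / 3.0          # damped transition matrix
rank = np.ones(3) / 3.0
for _ in range(1000):
    rank = rank @ google
print("PageRank scores:", rank)
```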

Lecture notes

Chapter 8: Markov chains, relational representation and natural language models

  • Rational representation: Representation of objects based on a priori given features
  • Relational representation: Representation of objects based on their contexts
  • Abstract and concrete universals
  • Markov chains, probabilistic networks and co-occurrence matrix
  • Neural embedding for network based representations
  • Neuro-probabilistic models of the language
  • Word2Vec and Doc2Vec models
  • Applied natural language modeling problems
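
A minimal sketch of relational representation on a toy corpus: each word is described by its co-occurrence counts with surrounding words rather than by a priori given features (the corpus and window size are arbitrary):

```python
import numpy as np

corpus = [
    "the cat sits on the mat",
    "the dog sits on the mat",
    "the cat chases the dog",
]
tokens = [sentence.split() for sentence in corpus]
vocab = sorted({w for sent in tokens for w in sent})
index = {w: i for i, w in enumerate(vocab)}

window = 2
cooc = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                cooc[index[w], index[sent[j]]] += 1

# Rows of the co-occurrence matrix are context-based vector representations;
# word2vec-style neural embeddings can be seen as a compressed version of this.
print(vocab)
print(cooc[index["cat"]])
print(cooc[index["dog"]])   # "cat" and "dog" get similar context profiles
```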

Lecture notes

Chapter 9: Deep neural networks (Auto-encoders)

  • Representation learning
  • Local representation (usually in manifold learning) of objects
  • Distributed representation of objects
  • Data compression
  • Stacking PCA layers
  • Multi-Layer Perceptron (MLP) networks
  • Architectural diversity of deep networks
  • TensorFlow
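
A minimal sketch of a dense autoencoder written with the Keras API in TensorFlow 2.x (an assumption about the software version; the data here is random and only illustrates the shapes):

```python
import numpy as np
import tensorflow as tf

rng = np.random.RandomState(5)
X = rng.uniform(size=(1000, 20)).astype("float32")   # hypothetical 20D inputs

encoder = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(3, activation="linear"),   # 3D bottleneck code
])
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(20, activation="linear"),
])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)   # reconstruct the input

codes = encoder.predict(X)        # learned low-dimensional representation
print("code shape:", codes.shape)
```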

Lecture notes

Chapter 10: Ensemble models: When randomization is a resource

  • Meta-Level parameters for machine learning algorithms
  • Randomization as a resource
  • Bootstrap aggregating, also called bagging (Independent learners)
  • Boosting (Sequentially improving learners)
  • Decision trees
  • Ensemble of weak learners: Random forests and extra trees
  • Supervised and unsupervised ensembles
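
A minimal scikit-learn sketch comparing a single decision tree with bagging and a random forest on synthetic classification data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Hypothetical classification data just to compare a single tree with ensembles.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

single_tree = DecisionTreeClassifier(random_state=0)
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0)

for name, model in [("tree", single_tree), ("bagging", bagging), ("forest", forest)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(name, "accuracy:", scores.mean().round(3))
```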

Lecture notes

Chapter 12: Data driven modeling in practice

  • Data collections and APIs
  • Data wrangling
  • Bias variance tradeoff
  • Overfitting and underfitting
  • Grid search and cross validation
  • Out of sample validation
  • Measures of performance
  • Precision/recall
  • Machine learning as a service
  • Deploying and serving trained models on a server
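
A minimal scikit-learn sketch of out-of-sample validation, grid search with cross validation, and precision/recall on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Out of sample validation: keep a test set the model never sees during tuning.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Grid search with cross validation over two meta-parameters.
param_grid = {"n_estimators": [50, 200], "max_depth": [3, None]}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X_train, y_train)

y_pred = search.predict(X_test)
print("best parameters:", search.best_params_)
print("precision:", precision_score(y_test, y_pred).round(3))
print("recall   :", recall_score(y_test, y_pred).round(3))
```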

Lecture notes

Chapter 13: Fourier analysis for signal processing

  • Signal processing for data driven modeling
  • Translation invariance
  • Rotation invariance
  • Scale invariance
  • Deformation
  • Harmonic analysis, Fourier series and idealized filters
  • Fourier transform in 1D and 2D signals (e.g. sounds and images)
  • Limits of feature engineering in complex environments
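
A minimal NumPy sketch of the 1D Fourier transform: two sine waves buried in noise, recovered from the magnitude spectrum (the sampling rate and frequencies are arbitrary choices):

```python
import numpy as np

# A 1D signal made of two sine waves plus noise; the Fourier transform
# exposes the two frequencies regardless of where the signal starts in time
# (a translation only changes the phase, not the magnitude spectrum).
fs = 1000                                   # sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
signal += 0.2 * np.random.RandomState(6).normal(size=t.size)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
magnitude = np.abs(spectrum)

# The two largest peaks should sit near 50 Hz and 120 Hz.
top = freqs[np.argsort(magnitude)[-2:]]
print("dominant frequencies (Hz):", sorted(top.round(1)))
```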

Lecture notes

Chapter 14: (Deep learning) Convolutional neural nets

  • Feature engineering and feature (representation) learning
  • Convolution and kernels in mathematics
  • Convolutional Neural Networks (CNN)
  • Hierarchical representation and compositionality
  • CNN for supervised and unsupervised tasks
  • CNN for image processing
  • CNN for non-Euclidean and graphical data
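
A minimal convolutional network sketched with the Keras API in TensorFlow 2.x (an assumption about the software version); random 28x28 images stand in for a real dataset such as MNIST:

```python
import numpy as np
import tensorflow as tf

rng = np.random.RandomState(7)
X = rng.uniform(size=(256, 28, 28, 1)).astype("float32")   # fake grayscale images
y = rng.randint(0, 10, size=256)                            # fake class labels

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),  # higher-level features
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
model.summary()
```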

Lecture notes

Chapter 15: (Deep Learning) Recurrent neural nets

  • Feed forward networks vs. recurrent networks
  • Sequential data analysis
  • Long Short-Term Memory (LSTM) networks
  • Recurrent neural nets as universal Turing machines
  • Range of applications
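
A minimal LSTM sketch with the Keras API in TensorFlow 2.x (an assumption about the software version), trained on random sequences with an easy synthetic target:

```python
import numpy as np
import tensorflow as tf

rng = np.random.RandomState(8)
X = rng.uniform(size=(512, 20, 3)).astype("float32")   # (samples, timesteps, features)
y = X[:, :, 0].sum(axis=1)                             # a target the network can learn

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(20, 3)),      # reads the sequence step by step
    tf.keras.layers.Dense(1),                           # regression output
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print("test prediction:", model.predict(X[:1], verbose=0))
```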

Lecture notes

Chapter 16: (Deep Learning) Recurrent neural nets and dynamical systems

  • Applications to dynamical systems
  • Chaotic systems
  • Time series forecasting
  • Text generation
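
A minimal sketch of the usual preprocessing step for data-driven dynamical systems: turning one time series into (window, next value) training pairs for a recurrent network (the window length and synthetic signal are arbitrary choices):

```python
import numpy as np

# A noisy sinusoid stands in for a measured signal from a dynamical system.
series = np.sin(np.linspace(0, 20 * np.pi, 2000)) + \
         0.1 * np.random.RandomState(9).normal(size=2000)

window = 50
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]                      # the value right after each window

print("windows:", X.shape)               # (1950, 50)
print("targets:", y.shape)               # (1950,)
# X[..., None] gives the (samples, timesteps, features) shape expected by the
# LSTM sketch from the previous chapter.
```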

Lecture notes

Chapter 17: Introduction to reinforcement learning and agent based learning

  • Reinforcement Learning as the problem of agent based learning
  • Markov Decision Process (MDP)
  • Main approaches and architectures in reinforcement learning
  • Applications of reinforcement learning for planning and control
  • Deep reinforcement learning
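
A minimal sketch of tabular Q-learning on a toy 5-state corridor MDP (all hyper-parameters are arbitrary choices; deep reinforcement learning replaces the table with a neural network):

```python
import numpy as np

# Toy corridor MDP: 5 states in a row, two actions (0 = left, 1 = right),
# reward 1 only when the right end is reached.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.3    # epsilon kept high for fast exploration
rng = np.random.RandomState(10)

for episode in range(300):
    s = 0
    while s != n_states - 1:                       # episode ends at the goal state
        if rng.rand() < epsilon:
            a = rng.randint(n_actions)             # explore
        else:
            a = int(np.argmax(Q[s]))               # exploit current estimates
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: bootstrap from the best action of the next state.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print("greedy policy (1 = move right):", Q.argmax(axis=1))
```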

Lecture notes

DATA DRIVEN APPLICATIONS

  • Design space exploration
    • Structural engineering
    • Computational drug design and chemistry
  • Exploring cities with data streams
    • Urban morphology and spatial analysis using network analysis and machine learning techniques
    • Geo-mapping with high dimensional data sets
    • City pattern mining: How to read and explore cities at the age of Big Data
    • Unconventional data streams: location-based social networks (tweets, texts, images), Google Street View images, Wikipedia entries and web crawling as new sensory systems in cities
  • Data driven simulation
    • Learning physics and model reduction for fluid dynamics: water flow, wind flow and air pollution at different scales
    • Urban traffic modeling with mobile phone data or GPS traces
  • Urban economy and real estate market
  • Time series forecasting in financial markets
  • Networks and systemic risk
    • Energy networks
    • Flight networks
    • Economic networks
  • Dynamical systems and control
    • Dynamic resource allocation: CPUs, clouds, manufacturing job shops
    • Smart home (e.g. data driven learning of comfort for the users)
    • Dynamic pricing and planning: (e.g. tolls, lights, tickets)
  • Natural language modeling applications
    • Sentiment analysis
    • Topic modeling
    • Smart news
  • Atmospheric science and geophysical applications
    • Teleconnections
    • Effects of cyclones on ice core accumulation
  • Image processing and remote sensing applications
    • Detecting informal settlements from satellite images