This course provides an overview of the fundamentals of machine learning on modern Intel® architecture. Topics covered include:

- Review the types of problems that machine learning can solve
- Understand the building blocks
- Learn the fundamentals of model building in machine learning
- Explore key algorithms

At the end of this course, students will have practical knowledge of:

- Supervised learning algorithms
- Key concepts such as underfitting and overfitting, regularization, and cross-validation
- How to identify the type of problem to solve, choose the correct algorithm, tune parameters and validate a model

The course is structured around 12 weeks of lectures and exercises. Each week requires three hours to complete. The exercises are implemented in Python*, so it is recommended to become familiar with the language (you can learn it along the way).

# Prerequisites

- Python programming
- Calculus
- Linear algebra
- Statistics

## Course

Below you can download the lessons for each week of the course. Just click the download link.

**Week 1**

This class introduces the basic data science toolkit:

- Jupyter * Notebook for interactive coding
- NumPy, SciPy, and pandas for numerical computation
- Matplotlib and seaborn for data visualization
- Scikit-learn* for machine learning

You will use these tools to work through the exercises each week.
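The course materials themselves aren't reproduced here, but as a minimal sketch of how two of these tools fit together (the toy data and column names below are illustrative, not from the course):

```python
import numpy as np
import pandas as pd

# Build a small pandas DataFrame from a NumPy array of toy data
data = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
df = pd.DataFrame(data, columns=["feature_a", "feature_b"])

# pandas provides quick summary statistics over each column
summary = df.describe()
print(summary.loc["mean"])  # per-column means
```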

**Week 2**

This class introduces the basics and vocabulary of machine learning:

- Supervised learning and how it can be applied to regression and classification problems.
- K-Nearest Neighbor algorithm (KNN) for classification

**Week 3**

This class reviews core principles of model generalization:

- The difference between overfitting and underfitting a model
- The bias-variance tradeoff
- Finding the optimal train/test split, cross-validation, and the relationship between model complexity and error
- Introduction to the linear regression model for supervised learning
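A minimal sketch of cross-validating a linear regression model, assuming scikit-learn (the synthetic data here is my own illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic regression data: y = 2x + 1 plus Gaussian noise
rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 2 * X.ravel() + 1 + rng.normal(0, 0.5, size=100)

# 5-fold cross-validation; each fold is held out once as a test set
model = LinearRegression()
scores = cross_val_score(model, X, y, cv=5)  # R^2 score by default
print("mean R^2:", scores.mean())
```

Comparing the per-fold scores is one quick way to spot a model that overfits some splits of the data.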

**Week 4**

This class builds on concepts taught in previous weeks. In it, you will:

- Learn about cost functions, regularization, feature selection, and hyperparameters
- Understand more complex optimization algorithms such as gradient descent and their application to linear regression
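As a sketch of the gradient-descent idea applied to linear regression (written from scratch in NumPy; the data, learning rate, and iteration count are illustrative assumptions):

```python
import numpy as np

# Fit y = w*x + b by batch gradient descent on mean squared error
rng = np.random.RandomState(1)
x = rng.uniform(0, 1, size=200)
y = 3.0 * x + 0.5 + rng.normal(0, 0.05, size=200)

w, b = 0.0, 0.0
lr = 0.5  # learning rate: a hyperparameter
for _ in range(1000):
    y_hat = w * x + b
    grad_w = 2 * np.mean((y_hat - y) * x)  # dMSE/dw
    grad_b = 2 * np.mean(y_hat - y)        # dMSE/db
    w -= lr * grad_w
    b -= lr * grad_b

print("w:", w, "b:", b)  # should approach the true 3.0 and 0.5
```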

**Week 5**

This class discusses the following:

- Logistic regression and how it differs from linear regression
- Metrics for misclassification and scenarios in which they can be used
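A hedged sketch of fitting a logistic regression classifier and computing two common classification metrics (the breast-cancer dataset and solver settings are my choices for illustration):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Extra iterations so the solver converges on unscaled features
clf = LogisticRegression(max_iter=5000)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Precision: of the positive predictions, how many were right?
# Recall: of the true positives, how many did we find?
prec = precision_score(y_test, y_pred)
rec = recall_score(y_test, y_pred)
print("precision:", prec, "recall:", rec)
```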

**Week 6**

This class reviews:

- The fundamentals of probability theory and its application to the Naïve Bayes classifier
- The different types of Naïve Bayes classifiers and how to train a model using this algorithm
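Scikit-learn ships several Naïve Bayes variants; as one illustrative sketch, a Gaussian Naïve Bayes classifier (the dataset choice is mine, not the course's):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)

# GaussianNB assumes each feature is Gaussian given the class,
# and that features are conditionally independent (the "naïve" part)
nb = GaussianNB()
scores = cross_val_score(nb, X, y, cv=5)
print("mean accuracy:", scores.mean())
```

Other variants, such as `MultinomialNB` for count data, follow the same fit/predict interface.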

**Week 7**

This week covers:

- Support vector machines (SVMs): a popular algorithm used for classification problems
- Examples showing the similarity between SVMs and logistic regression
- How to calculate the SVM cost function
- Regularization in SVMs and some tips for obtaining non-linear classifications with SVMs
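One way to see the non-linear side of SVMs is the RBF kernel on data that no straight line can separate. A minimal sketch (the two-moons dataset and `C=1.0` are illustrative assumptions):

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two interleaving half-moons: not linearly separable
X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The RBF kernel yields a non-linear decision boundary;
# C controls regularization (smaller C = stronger regularization)
svm = SVC(kernel="rbf", C=1.0)
svm.fit(X_train, y_train)
accuracy = svm.score(X_test, y_test)
print("test accuracy:", accuracy)
```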

**Week 8**

Continuing the topic of advanced supervised learning algorithms, this class covers:

- Decision trees and how to use them for classification problems
- How to identify the best split and the features to split on
- Strengths and weaknesses of decision trees
- Regression trees, which predict continuous values
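A sketch of training a depth-limited decision tree for classification (dataset and depth limit are my illustrative choices):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# max_depth limits tree growth: a simple guard against overfitting,
# one of the weaknesses of unconstrained trees
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)
accuracy = tree.score(X_test, y_test)
print("test accuracy:", accuracy)
```

`DecisionTreeRegressor` offers the same interface for the regression-tree case.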

**Week 9**

Continuing with what was learned in Week 8, this class teaches:

- The concepts of bootstrapping and aggregation (commonly known as "bagging") to reduce variance
- The Random Forest algorithm, which further reduces the correlation between trees seen in bagging models
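A minimal Random Forest sketch, assuming scikit-learn (dataset and `n_estimators` are illustrative choices):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Each tree is trained on a bootstrap sample and considers only a
# random subset of features at each split, decorrelating the trees
forest = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(forest, X, y, cv=5)
print("mean accuracy:", scores.mean())
```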

**Week 10**

This week, learn about boosting, an ensemble technique that helps reduce both variance and bias.
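As an illustrative sketch of boosting (using scikit-learn's gradient boosting; the dataset and parameters are my assumptions, not the course's):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Boosting fits shallow trees sequentially, each one correcting
# the errors of the ensemble built so far
gb = GradientBoostingClassifier(
    n_estimators=100, learning_rate=0.1, random_state=0)
gb.fit(X_train, y_train)
accuracy = gb.score(X_test, y_test)
print("test accuracy:", accuracy)
```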

**Week 11**

Until now, the course has largely focused on supervised learning algorithms. This week, learn about unsupervised learning algorithms and how they can be applied to clustering and dimensionality reduction problems.
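A sketch of clustering with k-means, one common unsupervised algorithm (the blob data and `n_clusters=3` are illustrative choices):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Three well-separated Gaussian blobs; the labels are never shown
# to the algorithm, which must discover the groups on its own
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)
print("cluster sizes:", [int((labels == k).sum()) for k in range(3)])
```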

**Week 12**

Dimensionality refers to the number of features in the data set. Theoretically, more features should mean better models, but this is not true in practice. Too many features can result in spurious correlations, more noise, and slower performance. This week, learn about algorithms that can be used for dimensionality reduction, such as:

- Principal component analysis (PCA)
- Multidimensional scaling (MDS)
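A minimal PCA sketch, assuming scikit-learn (the Iris dataset is my illustrative choice):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)

# Keep only the two directions of largest variance
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)
print("reduced shape:", X_2d.shape)
print("variance explained:", pca.explained_variance_ratio_.sum())
```

For Iris, two components retain most of the variance, so little information is lost by dropping from four features to two.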

