ALL ABOUT UNSUPERVISED LEARNING
All About Clustering: Hierarchical Clustering (Agglomerative, Divisive), Partitioned Clustering (K-means, Fuzzy C-means)
In unsupervised learning the data has no labels. The machine just looks for whatever patterns it can find.
Supervised Learning
- Deals with labelled data, where the output patterns are known to the system.
- Less complex.
- Conducts offline analysis.
- Gives comparatively more accurate and reliable results.
- Includes classification and regression problems.
Unsupervised Learning
- Works with unlabelled data, where the output is based only on the patterns the machine perceives in the data.
- More complex.
- Performs real-time analysis.
- Gives moderately accurate but reliable results.
- Includes clustering and association rule mining problems.
Clustering :
“Clustering” is the process of grouping similar entities together. The goal of this unsupervised machine learning technique is to find similarities between data points and group similar data points together.
Need of clustering :
- To determine the intrinsic grouping in a set of unlabelled data.
- To organise data into clusters showing the internal structure of the data.
- To partition the data points into meaningful, non-overlapping groups.
- To understand and extract value from large sets of structured and unstructured data.
Types of Clustering :
- Hierarchical Clustering : A tree structure that has a set of nested clusters. These are of two types: a.) Agglomerative b.) Divisive
- Partitioned clustering : A division of the set of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset. These are of two types: a.) K-means b.) Fuzzy C-means
Agglomerative Clustering :
In the agglomerative, or bottom-up, clustering method, we first assign each observation to its own cluster. Then we compute the similarity (e.g., distance) between each pair of clusters and join the two most similar clusters. We repeat the compute-and-merge step until only a single cluster is left.
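A minimal sketch of the bottom-up procedure, assuming scikit-learn is available (the sample points below are made up for illustration):

```python
# Minimal agglomerative clustering sketch using scikit-learn.
# The data points and parameters are illustrative, not from the article.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Six 2-D points forming two visually separable groups
X = np.array([[1.0, 1.0], [1.5, 1.2], [1.2, 0.8],
              [8.0, 8.0], [8.5, 8.2], [7.8, 7.9]])

# Merge bottom-up until only 2 clusters remain
model = AgglomerativeClustering(n_clusters=2, linkage="average")
labels = model.fit_predict(X)
print(labels)  # e.g. [0 0 0 1 1 1] (cluster numbering may vary)
```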
Divisive Clustering :
In the divisive, or top-down, clustering method, we assign all of the observations to a single cluster and then partition that cluster into the two least similar clusters. We proceed recursively on each cluster until there is one cluster per observation. There is evidence that divisive algorithms produce more accurate hierarchies than agglomerative algorithms in some circumstances, but divisive clustering is conceptually more complex.
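Divisive clustering is less commonly packaged in libraries. One hedged sketch, assuming scikit-learn, is a bisecting approach that repeatedly splits the largest remaining cluster with 2-means; this is one simple heuristic, not the only divisive strategy:

```python
# Illustrative top-down (divisive) clustering via repeated 2-means splits.
# Splitting the largest cluster each round is one simple heuristic.
import numpy as np
from sklearn.cluster import KMeans

def divisive_clusters(X, n_clusters):
    clusters = [np.arange(len(X))]          # start: all points in one cluster
    while len(clusters) < n_clusters:
        # pick the largest cluster and split it into two with 2-means
        idx = max(range(len(clusters)), key=lambda i: len(clusters[i]))
        members = clusters.pop(idx)
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(X[members])
        clusters.append(members[labels == 0])
        clusters.append(members[labels == 1])
    return clusters

X = np.random.rand(20, 2)                   # made-up data
for c in divisive_clusters(X, 3):
    print(c)                                # indices of points in each cluster
```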
Working : Hierarchical clustering
- Assign each item to its own cluster, so that if you have N items, you start with N clusters.
- Find the closest (most similar) pair of clusters and merge them into a single cluster. You now have one cluster less.
- Compute the distances (similarities) between the new cluster and each of the old clusters.
- Repeat steps two and three until all items are merged into a single cluster containing all N items. (A SciPy sketch of this loop follows.)
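SciPy's `linkage` function performs exactly this merge loop, and `dendrogram` draws the resulting tree of nested clusters. A sketch on made-up data:

```python
# Hedged sketch: linkage() runs the merge loop described above,
# and dendrogram() draws the resulting tree (data is illustrative).
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
import matplotlib.pyplot as plt

X = np.random.rand(10, 2)          # 10 random 2-D items
Z = linkage(X, method="single")    # each row of Z records one merge
dendrogram(Z)
plt.title("Hierarchical clustering dendrogram")
plt.show()
```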
Distance Measures :
- Complete-Linkage Clustering : Find the maximum possible distance between points belonging to two different clusters.
- Single-Linkage Clustering : Find the minimum possible distance between points belonging to two different clusters.
- Mean-Linkage Clustering : Find all possible pair-wise distances for points belonging to two different clusters and then calculate the average.
- Centroid-Linkage Clustering : Find the centroid of each cluster and calculate the distance between the centroids. (The sketch below compares these four criteria.)
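These measures correspond to the `method` argument of SciPy's `linkage` ('complete', 'single', 'average', 'centroid'). A small comparison sketch on illustrative random data:

```python
# Comparing linkage criteria on the same data (illustrative sketch).
# 'complete', 'single', 'average', and 'centroid' correspond to the
# four distance measures described above.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.rand(15, 2)
for method in ["complete", "single", "average", "centroid"]:
    Z = linkage(X, method=method)
    labels = fcluster(Z, t=3, criterion="maxclust")  # cut tree into 3 clusters
    print(method, labels)
```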
K-Means Algorithm : An iterative clustering algorithm that partitions the data into K clusters, reducing the within-cluster sum of squared errors (SSE) in each iteration until it converges to a local optimum.
Steps :
- Specify the desired number of clusters K
- Randomly assign each data point to a cluster
- Compute cluster centroids
- Reassign each point to the closest cluster centroid and recompute the cluster centroids. Repeat this step until the assignments stop changing. (A minimal implementation sketch follows this list.)
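A minimal from-scratch sketch of these steps on illustrative random data; in practice a library implementation such as scikit-learn's `KMeans` would be used:

```python
# Minimal K-means sketch following the steps above.
# For brevity this sketch assumes no cluster ever becomes empty.
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(X))      # step 2: random assignment
    for _ in range(n_iter):
        # step 3: compute the centroid of each current cluster
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # step 4: reassign each point to its closest centroid
        dists = np.linalg.norm(X[:, None] - centroids[None, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):    # stop when assignments settle
            break
        labels = new_labels
    return labels, centroids

X = np.random.rand(30, 2)                         # made-up data
labels, centroids = kmeans(X, k=3)
print(labels)
```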
Optimal number of clusters :
Choosing K is a fundamental issue in k-means clustering.
- If you plot K against the SSE, you will see that the error decreases as K increases.
- This is because the clusters become smaller, and hence the distortion within each cluster is also smaller.
- The goal of the elbow method is to choose the K at which the decrease in SSE levels off, the “elbow” of the curve. (A plotting sketch follows.)
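A plotting sketch of the elbow method, assuming scikit-learn (whose `inertia_` attribute is the SSE) and matplotlib, on made-up data:

```python
# Elbow-method sketch: plot SSE (inertia) against K and look for the bend.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

X = np.random.rand(100, 2)                        # illustrative data
ks = range(1, 10)
sse = [KMeans(n_clusters=k, n_init=10).fit(X).inertia_ for k in ks]

plt.plot(ks, sse, marker="o")
plt.xlabel("K")
plt.ylabel("SSE (inertia)")
plt.title("Elbow method")
plt.show()
```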
Logistic Regression :
Logistic regression is used to predict binary outcomes for a given set of independent variables. The dependent variable's outcome is discrete, such that y belongs to {0, 1}. A binary dependent variable can take only two values, such as 0 or 1, win or lose, pass or fail, healthy or sick.
Sigmoid Function Equation
Sigmoid Function – The probability in logistic regression is represented by the sigmoid function (also called the logistic function or the S-curve):
σ(z) = 1 / (1 + e^(−z))
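A tiny sketch of the sigmoid in Python, showing how it squashes any real input into a probability between 0 and 1:

```python
# Sigmoid sketch: maps any real number into (0, 1), which logistic
# regression interprets as the probability that y = 1.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0))    # 0.5   (the decision boundary)
print(sigmoid(4))    # ~0.982, confidently class 1
print(sigmoid(-4))   # ~0.018, confidently class 0
```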
EXTRA :
Hi everyone,
Hope you all are doing great,
I was going through the internet and found that people were asking for ‘Unsupervised Learning with (Machine Learning) Python’, so I created a series on Machine Learning including ‘Unsupervised Learning EXPLANATION and CODING’. If you are into video learning, I am also providing a link to the video. Hope you like it.
Video link : https://bit.ly/3AeLxY8