Data Science #18 - The k-nearest neighbors algorithm (1951)

Update: 2024-11-25

Description

In the 18th episode, we go over the original k-nearest neighbors paper:

Fix, Evelyn; Hodges, Joseph L. (1951). "Discriminatory Analysis. Nonparametric Discrimination: Consistency Properties." USAF School of Aviation Medicine, Randolph Field, Texas.

The paper introduces a nonparametric method for classifying a new observation z as belonging to one of two distributions, F or G, without assuming specific parametric forms.

Using k-nearest-neighbor density estimates, the paper implements a likelihood ratio test for classification and rigorously proves the method's consistency.
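To make the rule concrete, here is a minimal sketch in Python, assuming NumPy is available. The function name, the toy Gaussian data, and the choice k=7 are illustrative, not from the paper.

    import numpy as np

    def knn_classify(z, X_f, X_g, k=5):
        # Fix-Hodges style rule: pool the samples from F and G, look at
        # the k nearest neighbours of z, and compare per-class rates.
        pooled = np.vstack([X_f, X_g])
        labels = np.array([0] * len(X_f) + [1] * len(X_g))  # 0: F, 1: G
        dists = np.linalg.norm(pooled - z, axis=1)
        nearest = labels[np.argsort(dists)[:k]]
        # Within the same neighbourhood, the k-NN density estimates are
        # proportional to k_F / n_F and k_G / n_G, so comparing these
        # rates amounts to a likelihood ratio test (for equal priors).
        rate_f = np.sum(nearest == 0) / len(X_f)
        rate_g = np.sum(nearest == 1) / len(X_g)
        return "F" if rate_f > rate_g else "G"

    # Toy example: two Gaussian clouds standing in for samples from F and G.
    rng = np.random.default_rng(0)
    X_f = rng.normal(0.0, 1.0, size=(100, 2))
    X_g = rng.normal(3.0, 1.0, size=(100, 2))
    print(knn_classify(np.array([0.5, 0.2]), X_f, X_g, k=7))  # likely "F"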

The work is a precursor to the modern k-Nearest Neighbors (KNN) algorithm and established nonparametric approaches as viable alternatives to parametric methods.
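For comparison, the modern algorithm boils down to a majority vote among the k nearest labeled points and takes a few lines in common libraries. A minimal sketch assuming scikit-learn is installed; the dataset and n_neighbors value are illustrative.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0.0, 1.0, size=(100, 2)),   # class 0, "F"
                   rng.normal(3.0, 1.0, size=(100, 2))])  # class 1, "G"
    y = np.array([0] * 100 + [1] * 100)

    # Majority vote among the 7 nearest neighbours, the direct descendant
    # of the Fix-Hodges rule when the two samples are the same size.
    clf = KNeighborsClassifier(n_neighbors=7).fit(X, y)
    print(clf.predict([[0.5, 0.2]]))  # expected: [0]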

Its focus on consistency and data-driven learning influenced many modern machine learning techniques, including kernel density estimation and decision trees.

The paper's impact on data science is significant: it introduced concepts like neighborhood-based learning and flexible, distribution-free discrimination.

These ideas underpin algorithms widely used today in healthcare, finance, and artificial intelligence, where robust and interpretable models are critical.

Mike E