A linear discriminant classifier can be built using the lda function, given a formula and a dataframe. Here I am using the iris data set, which I have divided into a training set (used to build the classifier) and a testing set (used to validate it):
- julia> lda_mod = lda(fm, iris[train,:])
- Formula: Species ~ :(+(Sepal_Length,Sepal_Width,Petal_Length,Petal_Width))
By default, rank-reduced linear discriminant analysis is performed. This will usually perform a dimensionality reduction when there are more than two classes (similar in spirit to principal component analysis).
The scaling matrix is used to "sphere" or "whiten" the input data so that its sample covariance matrix is the identity matrix (this decreases the complexity of the classification computation). In other words, the whitened data has a sample covariance that corresponds to the unit n-sphere.
Note: rank reduction was successful, so the scaling matrix is of rank two rather than three.
- julia> scaling(lda_mod)
- 4x2 Array{Float64,2}:
-   0.660804   0.849652
-   1.59634    1.81788
-  -1.87905   -0.978034
-  -2.85134    2.14334
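Since whitening is ordinary linear algebra, the idea can be sketched independently of the package. The following NumPy snippet (an illustration of the concept, not the package's implementation) builds a whitening matrix from the eigendecomposition of the sample covariance and checks that the transformed data has identity covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generate correlated data: 100 observations, 4 features.
A = np.array([[2.0, 0.5, 0.0, 0.0],
              [0.0, 1.0, 0.3, 0.0],
              [0.0, 0.0, 1.5, 0.2],
              [0.0, 0.0, 0.0, 0.7]])
X = rng.standard_normal((100, 4)) @ A

Xc = X - X.mean(axis=0)           # center the data
S = Xc.T @ Xc / (len(X) - 1)      # sample covariance matrix

# Whitening matrix from the eigendecomposition S = V diag(d) V'.
d, V = np.linalg.eigh(S)
W = V @ np.diag(1.0 / np.sqrt(d))

Z = Xc @ W                        # whitened ("sphered") data
S_white = Z.T @ Z / (len(Z) - 1)

# The whitened sample covariance is (numerically) the identity.
print(np.allclose(S_white, np.eye(4)))  # prints True
```

Any matrix W with W' S W = I works as a scaling matrix; in rank-reduced LDA the columns are additionally restricted to the discriminant subspace, which is why the matrix above is 4x2 rather than 4x4.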
Prediction is as simple as plugging a dataframe and the model into the predict function. The model will extract the appropriate columns from the dataframe assuming they are named correctly:
- julia> lda_pred = predict(lda_mod,iris[test,:])
- 38x1 PooledDataArray{UTF8String,Uint32,2}:
- "setosa"
- ⋮
- "virginica"
Comparing the predictions against the true class labels (stored here in the vector y) gives the classification accuracy on the test set:
- julia> 100*sum(lda_pred .== y[test])/length(y[test])
- 100.0
Regularized linear discriminant analysis has an additional parameter, gamma. This regularization is analogous to ridge regression: it can be used to 'nudge' a singular matrix into a non-singular one (or to penalize the biased estimates of the eigenvalues - see the paper below). This is important when the sample size is small, since the sample covariance matrix may not be invertible.
The gamma value supplied should be between 0 and 1 inclusive. It represents the proportion of shrinkage of the sample covariance matrix towards a diagonal matrix whose entries are its average eigenvalue.
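This kind of shrinkage can be written as S_gamma = (1 - gamma) * S + gamma * (tr(S)/p) * I, where tr(S)/p is the average eigenvalue of S. A small NumPy sketch of that formula (a conceptual illustration, not the package's code) shows how it repairs a singular covariance matrix:

```python
import numpy as np

def shrink_covariance(S, gamma):
    """Shrink S towards avg_eig * I: (1 - gamma) * S + gamma * avg_eig * I."""
    p = S.shape[0]
    avg_eig = np.trace(S) / p     # average eigenvalue of S
    return (1.0 - gamma) * S + gamma * avg_eig * np.eye(p)

# A rank-deficient sample covariance (rank 1, determinant zero)
# becomes invertible after shrinkage.
S = np.array([[4.0, 2.0],
              [2.0, 1.0]])
S_reg = shrink_covariance(S, 0.2)

print(np.linalg.matrix_rank(S))      # prints 1
print(np.linalg.matrix_rank(S_reg))  # prints 2
```

At gamma = 0 the sample covariance is used unchanged; at gamma = 1 it collapses to the scaled identity, so intermediate values trade variance of the estimate against bias.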
- julia> lda_mod = lda(fm, iris[train,:], gamma=0.2)
- julia> scaling(lda_mod)
- 4x2 Array{Float64,2}:
-  -0.122872   0.39509
-   0.554429   1.50014
-  -0.938699  -0.282481
-  -1.70349    0.797025
Rank reduction can be disabled by setting the parameter rrlda to false (the default is true). When it is disabled, the scaling matrix is square:
- julia> lda_mod = lda(fm, iris[train,:], rrlda=false)
- julia> scaling(lda_mod)
- 4x4 Array{Float64,2}:
-  -0.708728   0.919018  -0.970648   2.99623
-  -0.85916   -2.03842   -2.20533   -2.15067
-  -0.76332    1.29499    0.677356  -3.26619
-  -1.38388   -2.33625    4.27793    2.95553
Lastly, a tolerance parameter can be set; it is used to determine the rank of all covariance matrices. The tolerance is relative to the largest eigenvalue of the sample covariance matrix and should be between 0 and 1. Setting it too high triggers a rank-deficiency error:
- julia> lda_mod = lda(fm, iris[train,:], tol=0.1)
- ERROR: Rank deficiency detected with tolerance=0.1.
- in error at error.jl:21
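A relative tolerance like this typically means that eigenvalues below tol times the largest eigenvalue are treated as zero when computing the rank. The following NumPy sketch shows that rule (my interpretation of the behavior, not the package's exact logic) and why a large tolerance declares an otherwise full-rank matrix deficient:

```python
import numpy as np

def effective_rank(S, tol):
    """Count eigenvalues of S above tol * (largest eigenvalue)."""
    eigs = np.linalg.eigvalsh(S)
    return int(np.sum(eigs > tol * eigs.max()))

# A covariance matrix with eigenvalues 10, 1, and 0.5
# (diagonal for simplicity).
S = np.diag([10.0, 1.0, 0.5])

print(effective_rank(S, 1e-8))  # prints 3: all eigenvalues count
print(effective_rank(S, 0.1))   # prints 1: only eigenvalues above 1.0 survive
```

With tol=0.1 every eigenvalue smaller than a tenth of the largest is discarded, so a covariance matrix whose eigenvalues span more than one order of magnitude, as in the iris example above, is reported as rank deficient.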