OP: Lisrelchen

[Topic Compilation] Pattern Recognition

1
Lisrelchen posted on 2016-3-22 08:37:14

Keywords: pattern recognition, introduction, intelligence, discriminant, applications


2
Lisrelchen posted on 2016-3-22 10:32:36
% Example 2.2.1
% "Introduction to Pattern Recognition: A MATLAB Approach"
% S. Theodoridis, A. Pikrakis, K. Koutroumbas, D. Cavouras

close('all');
clear

rand('seed',0);

% Generate the dataset X1 as well as the vector containing the class labels of
% the points in X1
N=[100 100]; % 100 vectors per class
l=2; % Dimensionality of the input space

x=[3 3]';
% x=[2 2]'; for X2
% x=[0 2]'; for X3
% x=[1 1]'; for X4

X1=[2*rand(l,N(1)) 2*rand(l,N(2))+x*ones(1,N(2))];
X1=[X1; ones(1,sum(N))];
y1=[-ones(1,N(1)) ones(1,N(2))];

% 1. Plot X1, where points of different classes are denoted by different colors
figure(1), plot(X1(1,y1==1),X1(2,y1==1),'bo',...
X1(1,y1==-1),X1(2,y1==-1),'r.')
figure(1), axis equal

% 2. Run the perceptron algorithm for X1 with learning parameter 0.01
rho=0.01; % Learning rate
w_ini=[1 1 -0.5]';
[w,iter,mis_clas]=perce(X1,y1,w_ini,rho)
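
The helper perce is not posted in this thread; it ships with the book's companion code. For readers without it, a minimal batch-perceptron sketch with the same interface (a hypothetical stand-in inferred from the call site, not the authors' implementation) could look like:

function [w, iter, mis_clas] = perce(X, y, w_ini, rho)
% Batch perceptron sketch (hypothetical stand-in for the book's
% companion function of the same name).
% X: (l+1) x N augmented data, y: 1 x N labels in {-1,+1},
% w_ini: initial weight vector, rho: learning rate.
max_iter = 10000;                    % safety cap on the number of epochs
w = w_ini;
iter = 0;
mis_clas = size(X, 2);
while (mis_clas > 0) && (iter < max_iter)
    iter = iter + 1;
    mis = (y .* (w' * X)) < 0;       % currently misclassified points
    mis_clas = sum(mis);
    % gradient step: w <- w + rho * sum of y_i * x_i over misclassified i
    w = w + rho * (X(:, mis) * y(mis)');
end
end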

3
Lisrelchen posted on 2016-3-22 10:33:16
% Example 2.2.2
% "Introduction to Pattern Recognition: A MATLAB Approach"
% S. Theodoridis, A. Pikrakis, K. Koutroumbas, D. Cavouras

close('all');
clear

rand('seed',0);

% Generate the dataset X1 as well as the vector containing the class labels of
% the points in X1
N=[100 100]; % 100 vectors per class
l=2; % Dimensionality of the input space

x=[3 3]';
% x=[2 2]'; for X2
% x=[0 2]'; for X3
% x=[1 1]'; for X4

X1=[2*rand(l,N(1)) 2*rand(l,N(2))+x*ones(1,N(2))];
X1=[X1; ones(1,sum(N))];
y1=[-ones(1,N(1)) ones(1,N(2))];

% 1. Plot X1, where points of different classes are denoted by different
% colors
figure(1), plot(X1(1,y1==1),X1(2,y1==1),'bo',...
X1(1,y1==-1),X1(2,y1==-1),'r.')
figure(1), axis equal

% 2. Run the online perceptron algorithm for X1 with learning parameter 0.01
rho=0.01; % Learning rate
%rho=0.05
w_ini=[1 1 -0.5]';
[w,iter,mis_clas]=perce_online(X1,y1,w_ini,rho)
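
Likewise, perce_online comes with the book's code. A hypothetical sketch of an online (sample-by-sample) perceptron with the same signature, one weight update per misclassified sample:

function [w, iter, mis_clas] = perce_online(X, y, w_ini, rho)
% Online perceptron sketch (hypothetical stand-in for the book's
% companion function): cycles through the samples one at a time.
N = size(X, 2);
max_epochs = 10000;                  % safety cap on the number of passes
w = w_ini;
iter = 0;
mis_clas = N;
while (mis_clas > 0) && (iter < max_epochs)
    iter = iter + 1;
    mis_clas = 0;
    for i = 1:N
        if y(i) * (w' * X(:, i)) <= 0   % sample i is on the wrong side
            w = w + rho * y(i) * X(:, i);
            mis_clas = mis_clas + 1;
        end
    end
end
end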

4
Lisrelchen posted on 2016-3-22 10:34:17
% Example 2.3.1
% "Introduction to Pattern Recognition: A MATLAB Approach"
% S. Theodoridis, A. Pikrakis, K. Koutroumbas, D. Cavouras

close('all');
clear;

% 1.
m(:,1)=[0 0 0 0 0]';
m(:,2)=[1 1 1 1 1]';
S=[.9 .3 .2 .05 .02;
   .3 .8 .1 .2 .05;
   .2 .1 .7 .015 .07;
   .05 .2 .015 .8 .01;
   .02 .05 .07 .01 .75];
P=[1/2 1/2];

% Generate X1 and the required class labels
N1=200;
randn('seed',0)
X1=[mvnrnd(m(:,1),S,fix(N1/2)); mvnrnd(m(:,2),S,N1-fix(N1/2))]';
z1=[ones(1,fix(N1/2)) 2*ones(1,N1-fix(N1/2))];

% Generate X2 and the required class labels
N2=200;
% N2=10000 % for X3
% N2=100000 % for X4
randn('seed',100)
X2=[mvnrnd(m(:,1),S,fix(N2/2)); mvnrnd(m(:,2),S,N2-fix(N2/2))]';
z2=[ones(1,fix(N2/2)) 2*ones(1,N2-fix(N2/2))];

% Compute the Bayesian classification error based on X2
S_true(:,:,1)=S;
S_true(:,:,2)=S;
[z]=bayes_classifier(m,S_true,P,X2);
err_Bayes_true=sum(z~=z2)/N2

% 2. Augment the data vectors of X1
X1=[X1; ones(1,N1)];
y1=2*z1-3;

% Augment the data vectors of X2
X2=[X2; ones(1,N2)];
y2=2*z2-3;

% Compute the classification error of the LS classifier based on X2
[w]=SSErr(X1,y1,0);
SSE_out=2*(w'*X2>0)-1;
err_SSE=sum(SSE_out.*y2<0)/N2
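
bayes_classifier and SSErr are also companion-code helpers. Hypothetical sketches consistent with how they are called above (each would sit in its own .m file; mvnpdf requires the Statistics Toolbox):

function z = bayes_classifier(m, S, P, X)
% Bayesian (maximum a posteriori) classifier for Gaussian classes --
% hypothetical sketch. m: l x c class means, S: l x l x c covariances,
% P: 1 x c priors, X: l x N data; z: 1 x N predicted class labels.
c = size(m, 2);
N = size(X, 2);
t = zeros(c, N);
for j = 1:c
    % prior-weighted Gaussian density of every point under class j
    t(j, :) = P(j) * mvnpdf(X', m(:, j)', S(:, :, j))';
end
[~, z] = max(t, [], 1);
end

function w = SSErr(X, y, C)
% Least-squares (sum of squared error) classifier with an optional
% ridge term -- hypothetical sketch. Solves (X*X' + C*I) w = X*y';
% C = 0 gives the plain LS solution.
w = (X * X' + C * eye(size(X, 1))) \ (X * y');
end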

5
Lisrelchen posted on 2016-3-22 10:35:02
% Example 2.3.2
% "Introduction to Pattern Recognition: A MATLAB Approach"
% S. Theodoridis, A. Pikrakis, K. Koutroumbas, D. Cavouras

close('all');
clear;

m=[0 0 0 0 0; 2 2 0 2 2]';
S=[1 0 0 0 0; 0 1 0 0 0; 0 0 10^(-350) 0 0; 0 0 0 1 0; 0 0 0 0 1];
l=size(S,1);

% Generate X1
N1=1000;
randn('seed',0)
X1=[mvnrnd(m(:,1),S,fix(N1/2)); mvnrnd(m(:,2),S,N1-fix(N1/2))]';
X1=[X1; ones(1,N1)];
y1=[ones(1,fix(N1/2)) -ones(1,N1-fix(N1/2))];

% Generate X2
N2=10000;
randn('seed',100)
X2=[mvnrnd(m(:,1),S,fix(N2/2)); mvnrnd(m(:,2),S,N2-fix(N2/2))]';
X2=[X2; ones(1,N2)];
y2=[ones(1,fix(N2/2)) -ones(1,N2-fix(N2/2))];

% 1. Compute the condition number of X1*X1' and the solution vector, w, for
% the original version of the LS classifier
cond_num=cond(X1*X1')
w=SSErr(X1,y1,0)
% Note: cond_num=1.4767e+017 and w is a vector of NaN (Not-A-Number)

% 2. Repeat step 1 for the regularized version of the LS classifier
C=0.1;
cond_num=cond(X1*X1'+C*eye(l+1))
w=SSErr(X1,y1,C)

% 3. Compute the classification error on X2 for the given w
SSE_out=2*(w'*X2>0)-1;
err_SSE=sum(SSE_out.*y2<0)/N2

6
Lisrelchen posted on 2016-3-23 08:30:55
%
% Written by:
% --
% John L. Weatherwax                2007-07-01
%
% email: wax@alum.mit.edu
%
% Please send comments and especially bug reports to the
% above email address.
%
%-----

close all; clc; clear;

mu1 = [ 0.1, 0.1 ].';
mu2 = [ 2.1, 1.9 ].';
mu3 = [ -1.5, 2.0 ].';

S = [ [ 1.2, 0.4 ]; [ 0.4, 1.8 ] ];
w1 = S \ mu1;
w2 = S \ mu2;
w3 = S \ mu3;

P = 1/3; % equal priors for the three classes
w10 = log(P) - 0.5 * (mu1.') * w1;
w20 = log(P) - 0.5 * (mu2.') * w2;
w30 = log(P) - 0.5 * (mu3.') * w3;

x = [ 2.1; 1.9 ];

% Compute the three discriminant values:
%
g1 = (w1.')*x + w10;
g2 = (w2.')*x + w20;
g3 = (w3.')*x + w30;

[m, class] = max( [g1, g2, g3] );

Source: http://waxworksmath.com/Authors/N_Z/Theodoridis/Code/Chapter2/chap_2_prob_7.m
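
Since the test point x coincides with mu2, class 2 should win here. A sanity check one could append (my addition; assumes the Statistics Toolbox): with equal priors and a shared covariance S, the class that maximizes the linear discriminant g_i(x) must also maximize the Gaussian density p(x | class i).

% Cross-check the discriminant decision against the densities directly:
p1 = mvnpdf(x', mu1', S);
p2 = mvnpdf(x', mu2', S);
p3 = mvnpdf(x', mu3', S);
[~, class_check] = max([p1, p2, p3]);
fprintf('discriminant pick = %d, density pick = %d\n', class, class_check);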

7
Lisrelchen posted on 2016-3-23 08:31:59
%
% Written by:
% --
% John L. Weatherwax                2007-07-01
%
% email: wax@alum.mit.edu
%
% Please send comments and especially bug reports to the
% above email address.
%
%-----

close all; clc; clear;

S = [ [ 0.3, 0.1, 0.1 ]; [ 0.1, 0.3, -0.1 ]; [ 0.1, -0.1, 0.3] ];
mu1 = [ 0; 0; 0 ];
mu2 = [ 0.5; 0.5; 0.5 ];
mu_diff = mu1 - mu2;
w = S \ mu_diff;

P1 = 0.3;
P2 = 1-P1;

% Point x0 on the decision hyperplane w'*(x - x0) = 0, from the textbook
% formula x0 = 0.5*(mu1 + mu2) - ln(P1/P2)*(mu1 - mu2)/||mu1 - mu2||^2,
% where the norm is taken with respect to inv(S):
norm_mu_diff = sqrt( mu_diff.' * ( S \ mu_diff ) );
x0 = 0.5 * (mu1 + mu2) - log( P1 / P2 ) * ( mu_diff / norm_mu_diff^2 );





Source: http://waxworksmath.com/Authors/N_Z/Theodoridis/Code/Chapter2/chap_2_prob_7.m
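
A quick check one could append (my addition; assumes the Statistics Toolbox): any point x0 on the decision hyperplane should make the prior-weighted class densities equal.

% At x0 on the hyperplane, P1*p(x0|class 1) should equal P2*p(x0|class 2).
v1 = P1 * mvnpdf(x0', mu1', S);
v2 = P2 * mvnpdf(x0', mu2', S);
fprintf('P1*p(x0|1) = %g, P2*p(x0|2) = %g\n', v1, v2);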

8
Lisrelchen posted on 2016-3-23 08:33:56
%
close all; clc; clear;

rand('seed',0);
randn('seed',0);

mu1 = [ 1, 1 ].';
mu2 = [ 1.5, 1.5 ].';
sigmasSquared = 0.2;
d = size(mu1,1);

nFeats = 10000;

X1 = mvnrnd( mu1, sigmasSquared*eye(d), nFeats );
X2 = mvnrnd( mu2, sigmasSquared*eye(d), nFeats );

h1 = plot( X1(:,1), X1(:,2), '.b' ); hold on;
h2 = plot( X2(:,1), X2(:,2), '.r' ); hold on;
legend( [h1,h2], {'class 1', 'class 2'} );

mean_diff = mu1 - mu2;

X = [ X1; X2 ];
labels = [ ones(nFeats,1); 2*ones(nFeats,1) ];

% Part (a): classify each of the vectors in X1 and X2 using the algebraically simplified notation:
%
rhs = 0.5 * ( dot(mu1,mu1) - dot(mu2,mu2) );
lhs = mean_diff' * X';

% If lhs > rhs we are selecting this sample from class #1 otherwise from class #2:
%
class_decision = lhs > rhs;
choosen_class = zeros(2*nFeats,1);
choosen_class(find(class_decision==1)) = 1;
choosen_class(find(class_decision~=1)) = 2;

P_correct = sum(choosen_class == labels)/(2*nFeats);
P_error   = 1 - P_correct;

% Calculate the optimal Bayes error rate (using the results from Problem~2.9):
%
addpath('../../../Duda_Hart_Stork/Code/Chapter2/ComputerExercises');
dm = mahalanobis(mu1,mu2,sigmasSquared*eye(d));
P_B = 1 - normcdf( 0.5 * dm );

fprintf('empirical P_e= %10.6f; analytic Bayes P_e= %10.6f\n',P_error,P_B);

% Part (b): classify each of the vectors in X1 and X2 (using the algebraically simplified notation):
%
rhs = 0.5 * ( dot(mu1,mu1) - dot(mu2,mu2) ) + sigmasSquared * log(2);
lhs = mean_diff' * X';

% If lhs > rhs we are selecting this sample from class #1 otherwise from class #2:
%
class_decision = lhs > rhs;
choosen_class = zeros(2*nFeats,1);
choosen_class(find(class_decision==1)) = 1;
choosen_class(find(class_decision~=1)) = 2;

P_correct = sum(choosen_class == labels)/(2*nFeats);
P_error   = 1 - P_correct;

% extract the expected loss using this classifier:
%
L_12 = sum( choosen_class(1:nFeats)~=1 );
L_21 = sum( choosen_class(nFeats+1:end)~=2 );
r_hat = ( L_12 + 0.5 * L_21 )/(2*nFeats);

fprintf('P_correct= %10.6f; r_hat= %10.6f\n',P_correct,r_hat);

Source: http://waxworksmath.com/Authors/N_Z/Theodoridis/WWW/chapter_2.html
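
mahalanobis is pulled in from the Duda/Hart/Stork companion code via addpath above. If that code is not at hand, a hypothetical stand-in with the same interface; for two equal-covariance Gaussians with equal priors, the Bayes error is then 1 - normcdf(dm/2), as used above:

function dm = mahalanobis(mu1, mu2, Sigma)
% Mahalanobis distance between two mean vectors under covariance Sigma
% (hypothetical sketch of the helper called above).
delta = mu1(:) - mu2(:);
dm = sqrt(delta' * (Sigma \ delta));
end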

9
Lisrelchen posted on 2016-3-23 08:35:13
close all; clc; clear;

rand('seed',0);
randn('seed',0);

mu1 = [ 1, 1 ].';
mu2 = [ 1.5, 1.5 ].';
sigmaCov = [ 1.01, 0.2; 0.2, 1.01 ];

nFeats = 10000;

X1 = mvnrnd( mu1, sigmaCov, nFeats );
X2 = mvnrnd( mu2, sigmaCov, nFeats );

h1 = plot( X1(:,1), X1(:,2), '.b' ); hold on;
h2 = plot( X2(:,1), X2(:,2), '.r' ); hold on;
legend( [h1,h2], {'class 1', 'class 2'} );

X = [ X1; X2 ];
labels = [ ones(nFeats,1); 2*ones(nFeats,1) ];

% Part (a): classify each of the vectors in X1 and X2 (directly using the likelihood ratio)
%
lratio = mvnpdf( X, mu1.', sigmaCov ) ./ mvnpdf( X, mu2.', sigmaCov );

% If lratio > 1 we are selecting this sample from class #1 otherwise from class #2:
%
class_decision = lratio > 1;
choosen_class = zeros(2*nFeats,1);
choosen_class(find(class_decision==1)) = 1;
choosen_class(find(class_decision~=1)) = 2;

P_correct = sum(choosen_class == labels)/(2*nFeats);
P_error   = 1 - P_correct;

% Calculate the optimal Bayes error rate (using the results from Problem~2.9):
%
addpath('../../../Duda_Hart_Stork/Code/Chapter2/ComputerExercises');
dm = mahalanobis(mu1,mu2,sigmaCov);
P_B = 1 - normcdf( 0.5 * dm );

fprintf('empirical P_e= %10.6f; analytic Bayes P_e= %10.6f\n',P_error,P_B);

% Part (b): classify each of the vectors in X1 and X2 (directly using the likelihood ratio)
%

% If lratio > 1/2 we are selecting this sample from class #1 otherwise from class #2:
%
class_decision = lratio > 1/2;
choosen_class = zeros(2*nFeats,1);
choosen_class(find(class_decision==1)) = 1;
choosen_class(find(class_decision~=1)) = 2;

P_correct = sum(choosen_class == labels)/(2*nFeats);
P_error   = 1 - P_correct;

% extract the expected loss using this classifier:
%
L_12 = sum( choosen_class(1:nFeats)~=1 );
L_21 = sum( choosen_class(nFeats+1:end)~=2 );
r_hat = ( L_12 + 0.5 * L_21 )/(2*nFeats);

fprintf('P_correct= %10.6f; r_hat= %10.6f\n',P_correct,r_hat);

Source: http://waxworksmath.com/Authors/N_Z/Theodoridis/WWW/chapter_2.html
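
Side note: the lratio > 1/2 threshold in Part (b) is what the minimum-risk rule gives, decide class 1 when p(x|1)/p(x|2) > (P2*lambda21)/(P1*lambda12), if one assumes equal priors P1 = P2 = 1/2 and losses lambda12 = 1, lambda21 = 1/2. The same weights appear in the empirical risk r_hat = (L_12 + 0.5*L_21)/(2*nFeats).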

10
Lisrelchen posted on 2016-3-23 08:37:30
Nearest Neighbors Classification using MATLAB
close all; clc; clear;

rand('seed',0);
randn('seed',0);

nTraining = 50;
nTesting  = 50;

% Generate training data like for Problem 2.12:
%
mu1 = [ 1, 1 ].';
mu2 = [ 1.5, 1.5 ].';
sigmasSquared = 0.2;
d = size(mu1,1);

nFeats = nTraining;

X1 = mvnrnd( mu1, sigmasSquared*eye(d), nFeats );
X2 = mvnrnd( mu2, sigmasSquared*eye(d), nFeats );

if( 0 )
  h1 = plot( X1(:,1), X1(:,2), '.b' ); hold on;
  h2 = plot( X2(:,1), X2(:,2), '.r' ); hold on;
  legend( [h1,h2], {'class 1', 'class 2'} );
end

X_train = [ X1; X2 ];
labels_train = [ ones(nFeats,1); 2*ones(nFeats,1) ];

% Generate nTesting new points from each class to classify:
%
nFeats = nTesting;

X1 = mvnrnd( mu1, sigmasSquared*eye(d), nFeats );
X2 = mvnrnd( mu2, sigmasSquared*eye(d), nFeats );

X_test = [ X1; X2 ];
labels_test = [ ones(nFeats,1); 2*ones(nFeats,1) ];

% Classify each of the vectors in X_test using the k-NN rule for k = 1, ..., 11
%
addpath('../../../Duda_Hart_Stork/BookSupplements/ClassificationToolbox/Src');
for nni = 1:11
  test_targets = Nearest_Neighbor( X_train.', labels_train, X_test.', nni);
  P_NN_error = sum( test_targets(:) ~= labels_test(:) )/length(test_targets);
  fprintf('P_e %2dNN= %10.6f; \n',nni,P_NN_error);
end

% Calculate the optimal Bayes error rate (using the results from Problem~2.9):
%
addpath('../../../Duda_Hart_Stork/Code/Chapter2/ComputerExercises');
dm = mahalanobis(mu1,mu2,sigmasSquared*eye(d));
P_B = 1 - normcdf( 0.5 * dm );

fprintf('P_B= %10.6f\n',P_B);
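
Nearest_Neighbor comes from the Duda/Hart/Stork Classification Toolbox added to the path above. For readers without it, a hypothetical k-NN sketch matching the call site (features stored one column per sample):

function test_targets = Nearest_Neighbor(train_feats, train_targets, test_feats, k)
% Minimal k-nearest-neighbor classifier (hypothetical stand-in for the
% Classification Toolbox routine called above).
nTest = size(test_feats, 2);
test_targets = zeros(1, nTest);
for i = 1:nTest
    % squared Euclidean distance from test point i to every training point
    diffs = bsxfun(@minus, train_feats, test_feats(:, i));
    [~, idx] = sort(sum(diffs.^2, 1));
    % majority vote among the k nearest training labels
    test_targets(i) = mode(train_targets(idx(1:k)));
end
end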
