OP: yuzhaoyu

[Paid programming] Imitating a MATLAB example from a book: why does it fail under 6.5?


OP
yuzhaoyu posted on 2013-03-15 15:22:13
Bounty: 200 forum coins

P=[0.88;0.89;0.98;0.91;0.86;0.98;0.82;0.90;0.75]

T=[0.81;1.0;1.0;0.67;0.01;0.97;1.0;0.95;1.0]

net=newff([0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1],[3 ,1],{ ‘tansig’, ‘tansig’}, ‘traingd’)

net=init(net)

net.trainParam.show=100;

net.trainParam.lr=0.05;

net.trainParam.epochs=300;

net.trainParam.goal=0.0001;

[net,tr]=train(net,p,t);

IW1=net.IW{1,1}

B1=net.B{1}

IW2=net.IW{2,1}

B2=net.B{2}

P1=[0.66;0.78;0.88;0.98;0.78;0.89;0.88;0.87;1.0];

T1=sim(net,p1);


The error is:  ??? net=newff([0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1],[3 ,1],{ ‘tansig’, ‘tansig’}, ‘traingd’)
                                                             |
Error: Missing variable or function.   Please help. Thanks, experts.


Reply #1
yuanxinqiang posted on 2013-03-16 18:42:35
A function or variable is missing.

newff


newff Create a feed-forward backpropagation network.

   Obsoleted in R2010b NNET 7.0.  Last used in R2010a NNET 6.0.4.
   The recommended function is feedforwardnet.

   Syntax

     net = newff(P,T,S)
     net = newff(P,T,S,TF,BTF,BLF,PF,IPF,OPF,DDF)

   Description

     newff(P,T,S) takes,
       P  - RxQ1 matrix of Q1 representative R-element input vectors.
       T  - SNxQ2 matrix of Q2 representative SN-element target vectors.
       Si  - Sizes of N-1 hidden layers, S1 to S(N-1), default = [].
             (Output layer size SN is determined from T.)
     and returns an N layer feed-forward backprop network.

     newff(P,T,S,TF,BTF,BLF,PF,IPF,OPF,DDF) takes optional inputs,
       TFi - Transfer function of ith layer. Default is 'tansig' for
             hidden layers, and 'purelin' for output layer.
       BTF - Backprop network training function, default = 'trainlm'.
       BLF - Backprop weight/bias learning function, default = 'learngdm'.
       PF  - Performance function, default = 'mse'.
       IPF - Row cell array of input processing functions.
             Default is {'fixunknowns','remconstantrows','mapminmax'}.
       OPF - Row cell array of output processing functions.
             Default is {'remconstantrows','mapminmax'}.
       DDF - Data division function, default = 'dividerand';
     and returns an N layer feed-forward backprop network.

     The transfer functions TF{i} can be any differentiable transfer
     function such as TANSIG, LOGSIG, or PURELIN.

     The training function BTF can be any of the backprop training
     functions such as TRAINLM, TRAINBFG, TRAINRP, TRAINGD, etc.

     *WARNING*: TRAINLM is the default training function because it
     is very fast, but it requires a lot of memory to run.  If you get
     an "out-of-memory" error when training try doing one of these:

     (1) Slow TRAINLM training, but reduce memory requirements, by
         setting NET.efficiency.memoryReduction to 2 or more. (See HELP TRAINLM.)
     (2) Use TRAINBFG, which is slower but more memory efficient than TRAINLM.
     (3) Use TRAINRP which is slower but more memory efficient than TRAINBFG.

     The learning function BLF can be either of the backpropagation
     learning functions such as LEARNGD, or LEARNGDM.

     The performance function can be any of the differentiable performance
     functions such as MSE or MSEREG.

   Examples

     [inputs,targets] = simplefitdata;
     net = newff(inputs,targets,20);
     net = train(net,inputs,targets);
     outputs = net(inputs);
     errors = outputs - targets;
     perf = perform(net,outputs,targets)

   Algorithm

     Feed-forward networks consist of Nl layers using the DOTPROD
     weight function, NETSUM net input function, and the specified
     transfer functions.

     The first layer has weights coming from the input.  Each subsequent
     layer has a weight coming from the previous layer.  All layers
     have biases.  The last layer is the network output.

     Each layer's weights and biases are initialized with INITNW.

     Adaption is done with TRAINS which updates weights with the
     specified learning function. Training is done with the specified
     training function. Performance is measured according to the specified
     performance function.
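The error marker in the original post points into the cell array `{ ‘tansig’, ‘tansig’}`: the quotes there are curly "smart" quotes (‘ ’), which MATLAB does not recognize as string delimiters, so it parses `tansig` as an unknown variable. Below is a sketch of the script with the syntax repaired; it also fixes the case-sensitivity slips in the original (`p`/`t`/`p1` vs. `P`/`T`/`P1`, `net.B` vs. `net.b`, and `net.IW{2,1}` vs. `net.LW{2,1}` for layer weights). This is an edit of the poster's code, not a verified run:

```matlab
% Corrected sketch of the original script.
% Root cause of "Missing variable or function": the curly quotes
% around 'tansig' and 'traingd' must be plain ASCII apostrophes.
P = [0.88; 0.89; 0.98; 0.91; 0.86; 0.98; 0.82; 0.90; 0.75];
T = [0.81; 1.0; 1.0; 0.67; 0.01; 0.97; 1.0; 0.95; 1.0];

net = newff([0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1], [3, 1], ...
            {'tansig', 'tansig'}, 'traingd');
net = init(net);

net.trainParam.show   = 100;
net.trainParam.lr     = 0.05;
net.trainParam.epochs = 300;
net.trainParam.goal   = 0.0001;

[net, tr] = train(net, P, T);   % MATLAB is case-sensitive: P, not p

IW1 = net.IW{1,1}   % input-to-hidden weights
B1  = net.b{1}      % hidden-layer biases (net.b, lowercase)
LW2 = net.LW{2,1}   % hidden-to-output weights (layer weights live in net.LW)
B2  = net.b{2}

P1 = [0.66; 0.78; 0.88; 0.98; 0.78; 0.89; 0.88; 0.87; 1.0];
T1 = sim(net, P1)   % P1, not p1
```

One caveat to check against the book: with nine input ranges, a 9x1 `P` is one sample of nine inputs, so `train` will then expect `T` to be 1x1, not 9x1. If the book intends nine one-input samples instead, `P`, `T`, and `P1` should be 1x9 row vectors and the range matrix a single `[0 1]`. In releases from R2010b onward, `newff` is deprecated and `feedforwardnet` is the recommended replacement, as the help text above notes.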
