MATLAB殘差神經(jīng)網(wǎng)絡(luò)設(shè)計

冬至子 · Source: matlab学习之家 · Author: matlab学习之家 · 2023-06-02 16:39

我們都知道在CNN網(wǎng)絡(luò)中,輸入的是圖片的矩陣,也是最基本的特征,整個CNN網(wǎng)絡(luò)就是一個信息提取的過程,從底層的特征逐漸抽取到高度抽象的特征,網(wǎng)絡(luò)的層數(shù)越多也就意味這能夠提取到的不同級別的抽象特征更加豐富,并且越深的網(wǎng)絡(luò)提取的特征越抽象,就越具有語義信息。但神經(jīng)網(wǎng)絡(luò)越深真的越好嗎?我們可以看下面一張圖片,圖中描述了不同深度的傳統(tǒng)神經(jīng)網(wǎng)絡(luò)效果對比圖,顯然神經(jīng)網(wǎng)絡(luò)越深效果不一定好。

圖片

對于傳統(tǒng)CNN網(wǎng)絡(luò),網(wǎng)絡(luò)深度的增加,容易導(dǎo)致梯度消失和爆炸。針對梯度消失和爆炸的解決方法一般是正則初始化和中間的正則化層,但是這會導(dǎo)致另一個問題,退化問題,隨著網(wǎng)絡(luò)層數(shù)的增加,在訓(xùn)練集上的準(zhǔn)確率卻飽和甚至下降了。為此,殘差神經(jīng)網(wǎng)絡(luò)應(yīng)運而生。

1. Algorithm Principle

殘差網(wǎng)絡(luò)通過加入 shortcut connections,變得更加容易被優(yōu)化。包含一個 shortcut connection 的幾層網(wǎng)絡(luò)被稱為一個殘差塊(residual block),如下圖所示。

圖片

普通的平原網(wǎng)絡(luò)與深度殘差網(wǎng)絡(luò)的最大區(qū)別在于,深度殘差網(wǎng)絡(luò)有很多旁路的支線將輸入直接連到后面的層,使得后面的層可以直接學(xué)習(xí)殘差,這些支路就叫做shortcut。傳統(tǒng)的卷積層或全連接層在信息傳遞時,或多或少會存在信息丟失、損耗等問題。ResNet 在某種程度上解決了這個問題,通過直接將輸入信息繞道傳到輸出,保護(hù)信息的完整性,整個網(wǎng)絡(luò)則只需要學(xué)習(xí)輸入、輸出差別的那一部分,簡化學(xué)習(xí)目標(biāo)和難度。

二、代碼實戰(zhàn)

構(gòu)建19層ResNet網(wǎng)絡(luò),以負(fù)荷預(yù)測為例
%% Load and prepare the data
clc
clear
close all
load Train.mat      % table of historical demand with calendar variables and lagged demand
% load Test.mat
Train.weekend = dummyvar(Train.weekend);
Train.month = dummyvar(Train.month);
Train = movevars(Train,{'weekend','month'},'After','demandLag');
Train.ts = [];

Train(1,:) = [];
y = Train.demand;                     % target: demand
x = Train{:,2:5};                     % predictors: columns 2-5 of the table
% Normalize predictors and target to [0,1]
[xnorm,xopt] = mapminmax(x',0,1);
[ynorm,yopt] = mapminmax(y',0,1);


xnorm = xnorm(:,1:1000);
ynorm = ynorm(1:1000);

k = 24;           % lag window length

% Slice the series into 2-D "images" of size 4-by-k (features x time steps)
for i = 1:length(ynorm)-k
    Train_xNorm{:,i} = xnorm(:,i:i+k-1);
    Train_yNorm(i) = ynorm(i+k-1);
    Train_y{i} = y(i+k-1);            % response kept on the original (unnormalized) scale
end
Train_x = Train_xNorm';


ytest = Train.demand(1001:1170);
xtest = Train{1001:1170,2:5};
[xtestnorm] = mapminmax('apply', xtest',xopt);
[ytestnorm] = mapminmax('apply',ytest',yopt);
% xtestnorm = [xtestnorm; Train.weekend(1001:1170,:)'; Train.month(1001:1170,:)'];
xtest = xtest';
for i = 1:length(ytestnorm)-k
    Test_xNorm{:,i} = xtestnorm(:,i:i+k-1);
    Test_yNorm(i) = ytestnorm(i+k-1);
    Test_y(i) = ytest(i+k-1);
end
Test_x = Test_xNorm';
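% Pack predictors and responses into tables: trainNetwork accepts a table whose
% first variable holds the predictor images (here cells of 4-by-24 windows) and,
% for training, a second variable holding the response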
x_train = table(Train_x,Train_y');
x_test = table(Test_x);
%% 訓(xùn)練集和驗證集劃分
% TrainSampleLength = length(Train_yNorm);
% validatasize = floor(TrainSampleLength * 0.1);
% Validata_xNorm = Train_xNorm(:,end - validatasize:end,:);
% Validata_yNorm = Train_yNorm(:,TrainSampleLength-validatasize:end);
% Validata_y = Train_y(TrainSampleLength-validatasize:end);
% 
% Train_xNorm = Train_xNorm(:,1:end-validatasize,:);
% Train_yNorm = Train_yNorm(:,1:end-validatasize);
% Train_y = Train_y(1:end-validatasize);
%% 構(gòu)建殘差神經(jīng)網(wǎng)絡(luò)
lgraph = layerGraph();
tempLayers = [
    imageInputLayer([4 24],"Name","imageinput")
    convolution2dLayer([3 3],32,"Name","conv","Padding","same")];
lgraph = addLayers(lgraph,tempLayers);


tempLayers = [
    batchNormalizationLayer("Name","batchnorm")
    reluLayer("Name","relu")];
lgraph = addLayers(lgraph,tempLayers);


tempLayers = [
    additionLayer(2,"Name","addition")
    convolution2dLayer([3 3],32,"Name","conv_1","Padding","same")];
lgraph = addLayers(lgraph,tempLayers);


tempLayers = [
    batchNormalizationLayer("Name","batchnorm_1")
    reluLayer("Name","relu_1")];
lgraph = addLayers(lgraph,tempLayers);


tempLayers = [
    additionLayer(2,"Name","addition_1")
    convolution2dLayer([3 3],32,"Name","conv_2","Padding","same")];
lgraph = addLayers(lgraph,tempLayers);


tempLayers = [
    batchNormalizationLayer("Name","batchnorm_2")
    reluLayer("Name","relu_2")];
lgraph = addLayers(lgraph,tempLayers);


tempLayers = [
    additionLayer(2,"Name","addition_2")
    convolution2dLayer([3 3],32,"Name","conv_3","Padding","same")];
lgraph = addLayers(lgraph,tempLayers);


tempLayers = [
    batchNormalizationLayer("Name","batchnorm_3")
    reluLayer("Name","relu_3")];
lgraph = addLayers(lgraph,tempLayers);


tempLayers = [
    additionLayer(2,"Name","addition_3")
    fullyConnectedLayer(1,"Name","fc")
    regressionLayer("Name","regressionoutput")];
lgraph = addLayers(lgraph,tempLayers);


% Clear the helper variable
clear tempLayers;

% Wire the residual connections: each conv output feeds both the BN->ReLU
% branch and the shortcut input (in2) of the following addition layer
lgraph = connectLayers(lgraph,"conv","batchnorm");
lgraph = connectLayers(lgraph,"conv","addition/in2");
lgraph = connectLayers(lgraph,"relu","addition/in1");
lgraph = connectLayers(lgraph,"conv_1","batchnorm_1");
lgraph = connectLayers(lgraph,"conv_1","addition_1/in2");
lgraph = connectLayers(lgraph,"relu_1","addition_1/in1");
lgraph = connectLayers(lgraph,"conv_2","batchnorm_2");
lgraph = connectLayers(lgraph,"conv_2","addition_2/in2");
lgraph = connectLayers(lgraph,"relu_2","addition_2/in1");
lgraph = connectLayers(lgraph,"conv_3","batchnorm_3");
lgraph = connectLayers(lgraph,"conv_3","addition_3/in2");
lgraph = connectLayers(lgraph,"relu_3","addition_3/in1");


% Visualize the layer graph and check it with the Network Analyzer
plot(lgraph);
analyzeNetwork(lgraph);
%% 設(shè)置網(wǎng)絡(luò)參數(shù)
maxEpochs = 60;
miniBatchSize = 20;
options = trainingOptions('adam', ...
 'MaxEpochs',maxEpochs, ...
 'MiniBatchSize',miniBatchSize, ...
 'InitialLearnRate',0.01, ...
 'GradientThreshold',1, ...
 'Shuffle','never', ...
 'Plots','training-progress',...
 'Verbose',0);


net = trainNetwork(x_train,lgraph ,options);


% Predict on the test set; the network was trained on unnormalized responses,
% so the predictions are already on the original demand scale
Predict_yNorm = predict(net,x_test);
Predict_y = double(Predict_yNorm);
plot(Test_y)
hold on
plot(Predict_y)
legend('Actual','Predicted')
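
To go beyond a visual comparison of the two curves, the fit can also be scored numerically. The lines below are a minimal sketch, not part of the original script, that reuses the Test_y and Predict_y variables produced above:

% Test-set error metrics (the MAPE assumes the demand values are nonzero)
err  = Test_y(:) - Predict_y(:);
rmse = sqrt(mean(err.^2));
mape = mean(abs(err) ./ abs(Test_y(:))) * 100;
fprintf('RMSE = %.2f, MAPE = %.2f%%\n', rmse, mape);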

網(wǎng)絡(luò)框架:

圖片

網(wǎng)絡(luò)分析:

圖片

網(wǎng)絡(luò)訓(xùn)練:

圖片

預(yù)測結(jié)果:

圖片

聲明:本文內(nèi)容及配圖由入駐作者撰寫或者入駐合作網(wǎng)站授權(quán)轉(zhuǎn)載。文章觀點僅代表作者本人,不代表電子發(fā)燒友網(wǎng)立場。文章及其配圖僅供工程師學(xué)習(xí)之用,如有內(nèi)容侵權(quán)或者其他違規(guī)問題,請聯(lián)系本站處理。 舉報投訴
  • 神經(jīng)網(wǎng)絡(luò)

    關(guān)注

    42

    文章

    4718

    瀏覽量

    100080
  • MATLAB仿真
    +關(guān)注

    關(guān)注

    4

    文章

    175

    瀏覽量

    19847
  • cnn
    cnn
    +關(guān)注

    關(guān)注

    3

    文章

    349

    瀏覽量

    22000
  • resnet
    +關(guān)注

    關(guān)注

    0

    文章

    12

    瀏覽量

    3146
收藏 人收藏

    評論

    相關(guān)推薦

    matlab 神經(jīng)網(wǎng)絡(luò) 數(shù)學(xué)建模數(shù)值分析

    matlab神經(jīng)網(wǎng)絡(luò) 數(shù)學(xué)建模數(shù)值分析 精通的可以討論下
    發(fā)表于 09-18 15:14

    神經(jīng)網(wǎng)絡(luò)Matlab程序

    神經(jīng)網(wǎng)絡(luò)Matlab程序
    發(fā)表于 09-15 12:52

    matlab小波神經(jīng)網(wǎng)絡(luò)源程序下載

    基于MATLAB的有關(guān)小波與神經(jīng)網(wǎng)絡(luò)緊致結(jié)合的源程序[hide] [/hide]
    發(fā)表于 02-22 15:50

    matlab神經(jīng)網(wǎng)絡(luò)30個案例分析源碼

    matlab神經(jīng)網(wǎng)絡(luò)30個案例分析源碼
    發(fā)表于 12-19 14:51

    MATLAB神經(jīng)網(wǎng)絡(luò)

    MATLAB神經(jīng)網(wǎng)絡(luò)
    發(fā)表于 07-08 15:17

    什么是深度收縮網(wǎng)絡(luò)

    。  在一定程度上,深度收縮網(wǎng)絡(luò)的工作原理,可以理解為:通過注意力機制注意到不重要的特征,然后通過軟閾值化將它們置為零;或者說,通過注意力機制注意到重要的特征,將它們保留下來,從而加強深度
    發(fā)表于 11-26 06:33

    Matlab神經(jīng)網(wǎng)絡(luò)工具箱是什么? 它在同步中的應(yīng)用有哪些?

    Matlab神經(jīng)網(wǎng)絡(luò)工具箱是什么?Matlab神經(jīng)網(wǎng)絡(luò)工具箱在同步中的應(yīng)用有哪些?
    發(fā)表于 04-26 06:42

    卷積神經(jīng)網(wǎng)絡(luò)模型發(fā)展及應(yīng)用

    地介紹了卷積 神經(jīng)網(wǎng)絡(luò)的發(fā)展歷史,然后分析了典型的卷積神經(jīng) 網(wǎng)絡(luò)模型通過堆疊結(jié)構(gòu)、網(wǎng)中網(wǎng)結(jié)構(gòu)、結(jié)構(gòu)以及 注意力機制提升模型性能的方法,并
    發(fā)表于 08-02 10:39

    matlab神經(jīng)網(wǎng)絡(luò)應(yīng)用設(shè)計

    matlab神經(jīng)網(wǎng)絡(luò)應(yīng)用設(shè)計詳細(xì)的介紹了matlab神經(jīng)網(wǎng)絡(luò)的結(jié)合
    發(fā)表于 02-23 10:47 ?0次下載

    matlab神經(jīng)網(wǎng)絡(luò)應(yīng)用設(shè)計》pdf下載

    matlab神經(jīng)網(wǎng)絡(luò)應(yīng)用設(shè)計》電子資料下載
    發(fā)表于 01-13 10:07 ?0次下載

    基于深度神經(jīng)網(wǎng)絡(luò)的遠(yuǎn)程監(jiān)督關(guān)系抽取模型

    基于卷積神經(jīng)網(wǎng)絡(luò)的遠(yuǎn)程監(jiān)督關(guān)系抽取方法提取的特征單一,且標(biāo)準(zhǔn)交叉熵?fù)p失函數(shù)未能較好處理數(shù)據(jù)集中正負(fù)樣本比例不均衡的情況。為此,提出一種基于深度神經(jīng)網(wǎng)絡(luò)的遠(yuǎn)程監(jiān)督關(guān)系抽取模型,通過改
    發(fā)表于 05-24 17:06 ?3次下載

    基于神經(jīng)網(wǎng)絡(luò)的微型電機轉(zhuǎn)子焊點圖像檢測

    基于神經(jīng)網(wǎng)絡(luò)的微型電機轉(zhuǎn)子焊點圖像檢測
    發(fā)表于 07-02 14:56 ?23次下載

    如何使用MATLAB神經(jīng)網(wǎng)絡(luò)工具箱

    神經(jīng)網(wǎng)絡(luò)是一種模擬人腦神經(jīng)元網(wǎng)絡(luò)的計算模型,廣泛應(yīng)用于各種領(lǐng)域,如圖像識別、語音識別、自然語言處理等。在MATLAB中,可以使用神經(jīng)網(wǎng)絡(luò)工具箱(Neural Network Toolb
    的頭像 發(fā)表于 07-03 10:34 ?1468次閱讀

    如何利用Matlab進(jìn)行神經(jīng)網(wǎng)絡(luò)訓(xùn)練

    Matlab作為一款強大的數(shù)學(xué)計算軟件,廣泛應(yīng)用于科學(xué)計算、數(shù)據(jù)分析、算法開發(fā)等領(lǐng)域。其中,Matlab神經(jīng)網(wǎng)絡(luò)工具箱(Neural Network Toolbox)為用戶提供了豐富的函數(shù)和工具
    的頭像 發(fā)表于 07-08 18:26 ?1143次閱讀

    網(wǎng)絡(luò)是深度神經(jīng)網(wǎng)絡(luò)

    網(wǎng)絡(luò)(Residual Network,通常簡稱為ResNet) 是深度神經(jīng)網(wǎng)絡(luò)的一種 ,其獨特的結(jié)構(gòu)設(shè)計在解決深層網(wǎng)絡(luò)訓(xùn)練中的梯度消失
    的頭像 發(fā)表于 07-11 18:13 ?762次閱讀