Bipolar Morphological Networks: Neurons Without Multiplication

These days it is hard to find a problem that someone has not yet proposed to solve with a neural network; for many problems, other methods are no longer even considered. In this situation it is natural that, in pursuit of a silver bullet, researchers and engineers offer ever new modifications of neural network architectures, promising "happiness for everybody, free, and let no one go away offended!" In industrial problems, however, it usually turns out that the accuracy of a model depends mostly on the cleanliness, size, and structure of the training set, while what the model really needs is a reasonable interface (for example, it is awkward when the natural answer is a list of variable length).


Performance, that is, speed, is another matter: here the dependence on the architecture is direct and predictable. But this does not interest everyone. Some prefer to dream of a magical age in which computing power will be unimaginable and energy will come out of thin air. There are, however, plenty of down-to-earth people as well. For them it matters that neural networks become more compact, faster, and more energy-efficient right now: for instance, when running on mobile and embedded systems without a powerful GPU, or when battery charge has to be conserved. A lot has already been done in this direction: compact integer-valued neural networks, pruning of redundant neurons, tensor decompositions, and so on.


Although we kept the option of using multiplication and nonlinear operations in the activation function, we managed to remove multiplications from the computations inside the neuron itself, replacing them with additions and taking maxima. We called the proposed model the bipolar morphological model of a neuron.



 


We, too, want neural networks to run fast and cheaply, and we decided to attack the problem at its root. In fully-connected and convolutional layers almost all of the work consists of multiply-accumulate operations, and in hardware a multiplier is far more expensive than an adder or a comparator, both in chip area and in energy. If the multiplications could be removed from the neuron altogether, inference would become cheaper on existing processors and, even more so, on specialized ones.


Of course, there is no free lunch: a multiplication-free neuron can only approximate the classical one, the network has to be restructured, and the lost accuracy has to be recovered by additional training. But the effort is worth it. Labor omnia vīcit improbus et dūrīs urgēns in rēbus egestās.


The idea itself is not new: morphological neural networks have been known since the 1990s [1, 2]. In a morphological neuron the multiplications and additions of the classical model are replaced by additions and taking maxima (or minima). Later, morphological perceptrons with dendritic structure [3] and a lattice-algebra view of single-neuron computation [4] were proposed. Training methods for such models also exist, from efficient training of dendrite morphological networks [5] to stochastic gradient descent [6]. However, these constructions differ substantially from the classical neuron and do not approximate it, so an already trained network cannot simply be converted into a morphological one. Our bipolar morphological neuron was designed precisely as an approximation of the classical neuron.


In the rest of this article we define the bipolar morphological neuron, show how a trained classical network can be converted into this form and fine-tuned, and report experiments on the MNIST and MRZ character recognition tasks.



The bipolar morphological neuron

Let us start from the classical model of a neuron:


$$y(\mathbf{x}, \mathbf{w}) = \sigma\!\left(\sum_{i=1}^{N} w_i x_i + w_{N+1}\right),$$


where $\mathbf{x}$ is the vector of inputs, $\mathbf{w}$ is the vector of weights with bias $w_{N+1}$, and $\sigma(\cdot)$ is a nonlinear activation function.


We want to get rid of the multiplications (and divisions) inside the sum. The natural tool is the logarithm, which turns products into sums; but the logarithm is defined only for positive arguments, while both inputs and weights can have any sign. Therefore we first split the sum into 4 parts according to the signs of the factors:


$$\sum_{i=1}^{N} x_i w_i = \sum_{i=1}^{N} p_i^{00}\, x_i w_i - \sum_{i=1}^{N} p_i^{01}\, x_i |w_i| - \sum_{i=1}^{N} p_i^{10}\, |x_i| w_i + \sum_{i=1}^{N} p_i^{11}\, |x_i| |w_i|,$$



$$p_i^{kj} = \begin{cases} 1, & (-1)^k x_i > 0 \ \text{and}\ (-1)^j w_i > 0,\\ 0, & \text{otherwise.} \end{cases}$$
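To see the decomposition in action, here is a small numpy check (our illustration, not code from the article) that the four sums with nonnegative terms reassemble the ordinary dot product:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)   # inputs of both signs
w = rng.normal(size=8)   # weights of both signs

# Indicator p[k, j, i]: (-1)^k * x_i > 0 and (-1)^j * w_i > 0
p = np.array([[((-1) ** k * x > 0) & ((-1) ** j * w > 0)
               for j in (0, 1)] for k in (0, 1)])

# Four sums with only nonnegative terms, recombined with signs + - - +
s = (np.sum(p[0, 0] * x * w)
     - np.sum(p[0, 1] * x * np.abs(w))
     - np.sum(p[1, 0] * np.abs(x) * w)
     + np.sum(p[1, 1] * np.abs(x) * np.abs(w)))

assert np.isclose(s, x @ w)  # exactly the original dot product
```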


Each of the four sums now contains only nonnegative terms, so the logarithm trick applies. Consider one such sum over positive terms and introduce the notation:


$$M = \max_j \left(x_j w_j\right), \qquad k = \frac{\sum_{i=1}^{N} x_i w_i}{M} - 1.$$


Then we can write:


$$\begin{aligned} \sum_{i=1}^{N} x_i w_i &= \exp\left\{\ln \sum_{i=1}^{N} x_i w_i\right\} = \exp\left\{\ln M(1+k)\right\} = (1+k)\exp \ln M = \\ &= (1+k)\exp\left\{\ln\left(\max_j (x_j w_j)\right)\right\} = (1+k)\exp \max_j \ln(x_j w_j) = \\ &= (1+k)\exp \max_j \left(\ln x_j + \ln w_j\right) = (1+k)\exp \max_j \left(y_j + v_j\right) \approx \exp \max_j \left(y_j + v_j\right), \end{aligned}$$


where $y_j = \ln x_j$ are the logarithms of the inputs and $v_j = \ln w_j$ are the logarithms of the weights. In the last step we discarded the factor $(1 + k)$, which is accurate when $k \ll 1$. By construction $0 \le k \le N - 1$: the lower bound is reached when a single term dominates the sum ($k = 0$), and the upper bound when all terms are equal ($k = N - 1$). So in the worst case the error grows with $N$, but in practice one term often dominates the sum and the approximation is tolerable; the remaining error can be compensated during training. Applying this approximation to each of the four sign-separated sums, we arrive at the bipolar morphological neuron.
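The size of the discarded factor is easy to probe numerically; a minimal sketch (ours) for one all-positive sum:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.1, 1.0, size=16)  # positive inputs
w = rng.uniform(0.1, 1.0, size=16)  # positive weights

exact = np.sum(x * w)
# The sum of products is replaced by exp of a max of sums of logarithms
approx = np.exp(np.max(np.log(x) + np.log(w)))

k = exact / np.max(x * w) - 1       # the discarded factor is (1 + k)
print(exact, approx, k)             # the approximation underestimates by exactly (1 + k)
assert np.isclose(exact, approx * (1 + k))
```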


The structure of the bipolar morphological neuron is shown in Fig. 1. ReLU is used here to split the signal into 4 paths according to the signs of the inputs and of the weights: the positive and negative parts of the input, each combined with the positive and negative parts of the weights. The name "bipolar" refers to this separate handling of the positive and negative parts of the signal, and "morphological" to the underlying max-plus operations.


Note what is left inside the neuron: only additions and taking maxima. The logarithms of the weights, $v_j$, are computed once, when the network is converted. The remaining exponents and logarithms are applied to whole path outputs rather than to individual products, so they can be attributed to the activation functions of the current and preceding layers; this is exactly the reservation made at the beginning, that multiplication and nonlinear operations may remain in the activation.



Fig. 1. The structure of the bipolar morphological neuron.


Thus the bipolar morphological (BM) neuron has the form:


$$\begin{aligned} \mathrm{BM}(\mathbf{x}, \mathbf{w}) = {}& \exp \max_j \left(\ln \mathrm{ReLU}(x_j) + v_j^0\right) - \exp \max_j \left(\ln \mathrm{ReLU}(x_j) + v_j^1\right) - {} \\ & - \exp \max_j \left(\ln \mathrm{ReLU}(-x_j) + v_j^0\right) + \exp \max_j \left(\ln \mathrm{ReLU}(-x_j) + v_j^1\right), \end{aligned}$$



$$v_j^k = \begin{cases} \ln |w_j|, & (-1)^k w_j > 0,\\ -\infty, & \text{otherwise} \end{cases}$$


(in practice $-\infty$ is replaced by a large negative number, so that the corresponding path simply drops out of the maximum). The bias $w_{N+1}$ and the activation $\sigma$ remain outside, as in the classical neuron. Since the BM weights are obtained directly from the classical ones via $v_j^k = \ln|w_j|$, any trained fully-connected or convolutional layer can be converted into BM form, and its output will approximate the original one with the accuracy discussed above.
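A minimal numpy sketch of one BM neuron converted from classical weights (our illustration; the function names and the finite stand-in for $-\infty$ are ours):

```python
import numpy as np

NEG_INF = -1e30  # finite stand-in for -infinity: exp() of it underflows to exactly 0

def log_relu(t):
    """ln(ReLU(t)); NEG_INF where t <= 0, so the path drops out of the max."""
    return np.where(t > 0, np.log(np.where(t > 0, t, 1.0)), NEG_INF)

def bm_neuron(x, w):
    """Forward pass of a BM neuron with weights converted from a classical one."""
    v0 = log_relu(w)    # v_j^0 = ln|w_j| where w_j > 0
    v1 = log_relu(-w)   # v_j^1 = ln|w_j| where w_j < 0
    yp = log_relu(x)    # ln ReLU(x_j)
    yn = log_relu(-x)   # ln ReLU(-x_j)
    return (np.exp(np.max(yp + v0)) - np.exp(np.max(yp + v1))
            - np.exp(np.max(yn + v0)) + np.exp(np.max(yn + v1)))

rng = np.random.default_rng(2)
x, w = rng.normal(size=5), rng.normal(size=5)
print(bm_neuron(x, w), x @ w)  # approximates the dot product: each path drops its (1 + k) factor
```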



How do we obtain such a network? The most tempting recipe: take a network that has already been trained in the classical form and simply convert it, bam, done! The conversion can be performed in two ways, which we will call method 1 and method 2; they differ in how the BM weights are computed from the classical ones. Does it work? Partly: the converted network keeps working, but its accuracy drops, sometimes dramatically, because the BM neuron only approximates the classical one.


A clarification on what gets converted: BM form replaces only the layers that actually multiply, that is, the convolutional and fully-connected ones; auxiliary layers such as ReLU, max-pooling, dropout, and softmax stay unchanged. In the tables below, the column "converted layers" lists the prefix of the network that has been converted to BM form.


To recover the accuracy we train incrementally (incremental learning): we convert the network layer by layer, and after each conversion we fine-tune the layers that follow the converted ones, keeping the converted BM layers fixed, then convert the next layer, and so on, until the desired depth of conversion is reached. In the experiments we report the accuracy for both conversion schemes, right after conversion (method 1, method 2) and after the additional training. A sketch of this schedule in code follows below.
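Here is the incremental schedule in PyTorch, for a toy sequential model with fully-connected layers. `BMLinear`, `convert_incrementally`, and the freezing policy are our illustrative choices under the assumptions above, not code from the article:

```python
import torch
import torch.nn as nn

NEG_INF = -30.0  # finite stand-in for -infinity: exp(-30) is negligible

class BMLinear(nn.Module):
    """Hypothetical BM counterpart of nn.Linear, built from its trained weights
    (assumes the source layer has a bias)."""
    def __init__(self, linear: nn.Linear):
        super().__init__()
        w = linear.weight.data                         # (out_features, in_features)
        log_abs = w.abs().clamp_min(1e-12).log()
        self.v0 = nn.Parameter(torch.where(w > 0, log_abs, torch.full_like(w, NEG_INF)))
        self.v1 = nn.Parameter(torch.where(w < 0, log_abs, torch.full_like(w, NEG_INF)))
        self.bias = nn.Parameter(linear.bias.data.clone())

    @staticmethod
    def log_relu(t):
        return torch.where(t > 0, t.clamp_min(1e-12).log(), torch.full_like(t, NEG_INF))

    def forward(self, x):                              # x: (batch, in_features)
        yp = self.log_relu(x).unsqueeze(1)             # (batch, 1, in)
        yn = self.log_relu(-x).unsqueeze(1)
        out = (torch.exp((yp + self.v0).amax(dim=-1)) - torch.exp((yp + self.v1).amax(dim=-1))
               - torch.exp((yn + self.v0).amax(dim=-1)) + torch.exp((yn + self.v1).amax(dim=-1)))
        return out + self.bias                         # (batch, out_features)

def convert_incrementally(model: nn.Sequential, finetune):
    """Convert Linear layers to BM form one by one; after each conversion,
    fine-tune the still-classical tail (the converted layer is frozen)."""
    for i, layer in enumerate(model):
        if isinstance(layer, nn.Linear):
            bm = BMLinear(layer)
            for p in bm.parameters():
                p.requires_grad = False
            model[i] = bm
            finetune(model)  # user-supplied training loop over the trainable parameters
    return model
```

Freezing the converted layer is the simplest policy; since additions and maxima are differentiable almost everywhere, the BM layers could instead be left trainable.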



MNIST


MNIST is the well-known dataset of handwritten digits: 60,000 training images of size 28 × 28 pixels and 10,000 test images. We used 10% of the training set for validation and the rest for training. Sample images are shown in Fig. 2.



Fig. 2. Sample images from the MNIST dataset.


We use the following notation for the layers:


conv(n, w_x, w_y) — a convolutional layer with n filters of size w_x × w_y;
fc(n) — a fully-connected layer with n neurons;
maxpool(w_x, w_y) — a max-pooling layer with window w_x × w_y;
dropout(p) — a dropout layer with drop probability p;
relu — the ReLU activation function, ReLU(x) = max(x, 0);
softmax — the softmax activation.


For MNIST we trained two convolutional networks:


CNN1: conv1(30, 5, 5) — relu1 — dropout1(0.2) — fc1(10) — softmax1.


CNN2: conv1(40, 5, 5) — relu1 — maxpool1(2, 2) — conv2(40, 5, 5) — relu2 — fc1(200) — relu3 — dropout1(0.3) — fc2(10) — softmax1.
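In PyTorch the two architectures could look as follows (our sketch; "valid" padding and unit strides are assumptions, since the notation above does not fix them; for training one would normally feed the pre-softmax logits to cross-entropy):

```python
import torch.nn as nn

# Input: 1 x 28 x 28 MNIST images.
cnn1 = nn.Sequential(
    nn.Conv2d(1, 30, kernel_size=5),  # conv1(30, 5, 5) -> 30 x 24 x 24
    nn.ReLU(),                        # relu1
    nn.Dropout(0.2),                  # dropout1(0.2)
    nn.Flatten(),
    nn.Linear(30 * 24 * 24, 10),      # fc1(10)
    nn.Softmax(dim=1),                # softmax1
)

cnn2 = nn.Sequential(
    nn.Conv2d(1, 40, kernel_size=5),  # conv1(40, 5, 5) -> 40 x 24 x 24
    nn.ReLU(),                        # relu1
    nn.MaxPool2d(2),                  # maxpool1(2, 2) -> 40 x 12 x 12
    nn.Conv2d(40, 40, kernel_size=5), # conv2(40, 5, 5) -> 40 x 8 x 8
    nn.ReLU(),                        # relu2
    nn.Flatten(),
    nn.Linear(40 * 8 * 8, 200),       # fc1(200)
    nn.ReLU(),                        # relu3
    nn.Dropout(0.3),                  # dropout1(0.3)
    nn.Linear(200, 10),               # fc2(10)
    nn.Softmax(dim=1),                # softmax1
)
```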


The recognition accuracy of the converted networks, immediately after conversion and after the additional training, is given in Table 1 for both conversion methods.


Table 1. Recognition accuracy on MNIST, %. "Method 1" and "Method 2" are the accuracies right after the corresponding conversion; "+ tuning" is the accuracy after the subsequent fine-tuning.


| Network | Converted layers | Method 1 | Method 1 + tuning | Method 2 | Method 2 + tuning |
|---|---|---|---|---|---|
| CNN1 | — | 98.72 | — | 98.72 | — |
| CNN1 | conv1 | 42.47 | 98.51 | 38.38 | 98.76 |
| CNN1 | conv1 — relu1 — dropout1 — fc1 | 26.89 | — | 19.86 | 94.00 |
| CNN2 | — | 99.45 | — | 99.45 | — |
| CNN2 | conv1 | 94.90 | 99.41 | 96.57 | 99.42 |
| CNN2 | conv1 — relu1 — maxpool1 — conv2 | 21.25 | 98.68 | 36.23 | 99.37 |
| CNN2 | conv1 — relu1 — maxpool1 — conv2 — relu2 — fc1 | 10.01 | 74.95 | 17.25 | 99.04 |
| CNN2 | conv1 — relu1 — maxpool1 — conv2 — relu2 — fc1 — relu3 — dropout1 — fc2 | 12.91 | — | 48.73 | 97.86 |

As the table shows, the accuracy right after conversion collapses, especially once the fully-connected layers are converted. Fine-tuning, however, restores it almost completely: with method 2, the fully converted CNN1 and CNN2 reach 94.00% and 97.86% against the baselines of 98.72% and 99.45%. Method 2 consistently gives the better result after the additional training.


An intermediate conclusion: a classical network can indeed be converted into a bipolar morphological one with only a moderate loss of accuracy, and the deeper network loses less.


MRZ


The MRZ task is the recognition of characters from the machine-readable zone (MRZ) of identity documents (see Fig. 3). The dataset contains 280,000 images of size 21 × 17 pixels, cropped from photographs of real documents and divided into 37 classes of MRZ characters.



Fig. 3. Sample images of MRZ characters.


CNN3: conv1(8, 3, 3) — relu1 — conv2(30, 5, 5) — relu2 — conv3(30, 5, 5) — relu3 — dropout1(0.25) — fc1(37) — softmax1.


CNN4: conv1(8, 3, 3) — relu1 — conv2(8, 5, 5) — relu2 — conv3(8, 3, 3) — relu3 — dropout1(0.25) — conv4(12, 5, 5) — relu4 — conv5(12, 3, 3) — relu5 — conv6(12, 1, 1) — relu6 — fc1(37) — softmax1.
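As with MNIST, the notation maps directly onto code; a PyTorch sketch of CNN3 under the same assumptions as before (ours; CNN4 is assembled analogously):

```python
import torch.nn as nn

# Input: 1 x 21 x 17 grayscale MRZ character crops (sizes from the text above).
cnn3 = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3),    # conv1(8, 3, 3)  -> 8 x 19 x 15
    nn.ReLU(),                         # relu1
    nn.Conv2d(8, 30, kernel_size=5),   # conv2(30, 5, 5) -> 30 x 15 x 11
    nn.ReLU(),                         # relu2
    nn.Conv2d(30, 30, kernel_size=5),  # conv3(30, 5, 5) -> 30 x 11 x 7
    nn.ReLU(),                         # relu3
    nn.Dropout(0.25),                  # dropout1(0.25)
    nn.Flatten(),
    nn.Linear(30 * 11 * 7, 37),        # fc1(37), one output per MRZ character class
    nn.Softmax(dim=1),                 # softmax1
)
```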


The recognition accuracy for these networks is given in Table 2, in the same format as before.


The picture is similar to MNIST: conversion alone degrades the accuracy severely, while the additional training restores it; the final fully-connected layer remains the most painful to convert.


Table 2. Recognition accuracy on MRZ, %. The columns have the same meaning as in Table 1.


| Network | Converted layers | Method 1 | Method 1 + tuning | Method 2 | Method 2 + tuning |
|---|---|---|---|---|---|
| CNN3 | — | 99.63 | — | 99.63 | — |
| CNN3 | conv1 | 97.76 | 99.64 | 83.07 | 99.62 |
| CNN3 | conv1 — relu1 — conv2 | 8.59 | 99.47 | 21.12 | 99.58 |
| CNN3 | conv1 — relu1 — conv2 — relu2 — conv3 | 3.67 | 98.79 | 36.89 | 99.57 |
| CNN3 | conv1 — relu1 — conv2 — relu2 — conv3 — relu3 — dropout1 — fc1 | 12.58 | — | 27.84 | 93.38 |
| CNN4 | — | 99.67 | — | 99.67 | — |
| CNN4 | conv1 | 91.20 | 99.66 | 93.71 | 99.67 |
| CNN4 | conv1 — relu1 — conv2 | 6.14 | 99.52 | 73.79 | 99.66 |
| CNN4 | conv1 — relu1 — conv2 — relu2 — conv3 | 23.58 | 99.42 | 70.25 | 99.66 |
| CNN4 | conv1 — relu1 — conv2 — relu2 — conv3 — relu3 — dropout1 — conv4 | 29.56 | 99.04 | 77.92 | 99.63 |
| CNN4 | conv1 — relu1 — conv2 — relu2 — conv3 — relu3 — dropout1 — conv4 — relu4 — conv5 | 34.18 | 98.45 | 17.08 | 99.64 |
| CNN4 | conv1 — relu1 — conv2 — relu2 — conv3 — relu3 — dropout1 — conv4 — relu4 — conv5 — relu5 — conv6 | 5.83 | 98.00 | 90.46 | 99.61 |
| CNN4 | conv1 — relu1 — conv2 — relu2 — conv3 — relu3 — dropout1 — conv4 — relu4 — conv5 — relu5 — conv6 — relu6 — fc1 | 4.70 | — | 27.57 | 95.46 |


So, when all layers except the last fully-connected one are converted, the networks lose almost nothing: with method 2 and fine-tuning, CNN4 stays at 99.61% against 99.67% for the classical baseline. Even the fully converted BM networks come close to the classical accuracy on both MNIST and MRZ.


What is the gain? Inside the BM layers there are no multiplications at all, only additions, maxima, and a handful of exponents and logarithms per neuron. On general-purpose CPUs, where a multiplication costs roughly as much as an addition, the benefit is small; but on specialized hardware, such as ASICs, FPGAs, or TPU-like accelerators, where an adder is far cheaper than a multiplier in both area and energy, such networks promise faster and more power-efficient inference.
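A back-of-the-envelope operation count makes this concrete (our estimate, following the neuron structure derived above), for one output value of a 5 × 5 convolution over 32 input channels:

```python
# Classical vs BM operation count for one output pixel of one conv filter.
k, C = 5, 32                  # kernel size and input channels (example values)
N = k * k * C                 # multiply-accumulate pairs in the classical convolution

classical = {"mul": N, "add": N}
bm = {
    "add": 4 * N,             # y_j + v_j inside the four paths
    "max": 4 * (N - 1),       # four max-reductions over N terms
    "exp": 4,                 # one exponent per path; ln of the inputs is
}                             # computed once per layer and shared by all filters

print(classical)              # {'mul': 800, 'add': 800}
print(bm)                     # {'add': 3200, 'max': 3196, 'exp': 4}
```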


This is only a first step, but it shows that neural networks can do without multiplications: with layer-by-layer conversion and additional training, bipolar morphological networks approach the accuracy of their classical prototypes.


P.S. The details can be found in our ICMV 2019 paper:
E. Limonova, D. Matveev, D. Nikolaev, and V. V. Arlazarov, "Bipolar morphological neural networks: convolution without multiplication," in Proc. SPIE 11433, Twelfth International Conference on Machine Vision (ICMV 2019), W. Osten, D. Nikolaev, and J. Zhou, Eds., 113433J, pp. 1-8, Jan. 2020, ISSN 0277-786X, ISBN 978-15-10636-43-9. DOI: 10.1117/12.2559299.



References

1. G. X. Ritter and P. Sussner, "An introduction to morphological neural networks," Proceedings of the 13th International Conference on Pattern Recognition, vol. 4, pp. 709-717 (1996).
2. P. Sussner and E. L. Esmi, Constructive Morphological Neural Networks: Some Theoretical Aspects and Experimental Results in Classification, pp. 123-144, Springer Berlin Heidelberg, Berlin, Heidelberg (2009).
3. G. X. Ritter, L. Iancu, and G. Urcid, "Morphological perceptrons with dendritic structure," in The 12th IEEE International Conference on Fuzzy Systems (FUZZ '03), vol. 2, pp. 1296-1301 (May 2003).
4. G. X. Ritter and G. Urcid, "Lattice algebra approach to single-neuron computation," IEEE Transactions on Neural Networks 14, pp. 282-295 (March 2003).
5. H. Sossa and E. Guevara, "Efficient training for dendrite morphological neural networks," Neurocomputing 131, pp. 132-142 (May 2014).
6. E. Zamora and H. Sossa, "Dendrite morphological neurons trained by stochastic gradient descent," in 2016 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 1-8 (Dec 2016).
