In Keras we can use LSTM, GRU, and SimpleRNN layers almost as easily as estimators in sklearn.

Here we mainly want to share a few things about LSTM, which can be divided into two subclasses: stateful LSTM and stateless LSTM.

Input_shape is [samples, timesteps, input_dim]. Here, samples is the number of examples in a batch, timesteps is the length of the sequence, and input_dim is the dimension of the vector at each timestep.

return_sequences=True: pay attention to this one. If True, you get a 3D tensor [samples, timesteps, output_dim]; otherwise you get a 2D tensor [samples, output_dim].
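A minimal sketch of this shape difference, using the tf.keras API (the batch size, timesteps, and unit counts here are arbitrary choices for illustration):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

x = np.random.rand(4, 10, 16).astype("float32")  # [samples, timesteps, input_dim]

# return_sequences=True: one output vector per timestep -> 3D tensor
seq_out = keras.Sequential([layers.LSTM(8, return_sequences=True, input_shape=(10, 16))])(x)

# return_sequences=False (the default): only the last timestep's output -> 2D tensor
vec_out = keras.Sequential([layers.LSTM(8, input_shape=(10, 16))])(x)

print(seq_out.shape)  # (4, 10, 8)
print(vec_out.shape)  # (4, 8)
```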

We pay attention to this because sometimes we need to feed the data into other layers, such as CNNs, to create more complicated neural networks.

We also have the TimeDistributed wrapper for RNNs. What is the purpose of this function? It applies a fully connected layer to every timestep, producing an output of a different dimension.

For example:

input_shape = [32, 10, 16]  # 32 samples, each sample has 10 timesteps, each timestep is a 16-dim vector

model.add(TimeDistributed(Dense(8, activation='relu')))

The output will be [32, 10, 8]. So it just passes each timestep's data through an 8-neuron layer, turning the shape from [10, 16] into [10, 8], reducing the dimension of the data.

If we instead do not set return_sequences=True on the RNN layer, the data shape after the RNN becomes [32, 8] rather than [32, 10, 8].
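The TimeDistributed example above can be sketched end-to-end in tf.keras. An LSTM with return_sequences=True produces the per-timestep 3D tensor that the wrapper then projects down (the 16-unit LSTM here is an assumed choice to match the [32, 10, 16] shape in the text):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.LSTM(16, return_sequences=True, input_shape=(10, 16)),  # -> (None, 10, 16)
    layers.TimeDistributed(layers.Dense(8, activation="relu")),    # -> (None, 10, 8)
])

x = np.random.rand(32, 10, 16).astype("float32")
print(model(x).shape)  # (32, 10, 8)
```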

It depends on whether you need the per-timestep sequence output or not.

```python
# as the first layer in a Sequential model
model = Sequential()
model.add(LSTM(32, input_shape=(10, 64)))  # 32 output units; input has 10 steps, 64 dims per step
# now model.output_shape == (None, 32)
# i.e. a time sequence goes in and a 32-dim vector comes out
# note: `None` is the batch dimension.

# the following is identical:
model = Sequential()
model.add(LSTM(32, input_dim=64, input_length=10))

# for subsequent layers, no need to specify the input size:
model.add(LSTM(16))

# to stack recurrent layers, you must use return_sequences=True
# on any recurrent layer that feeds into another recurrent layer.
# note that you only need to specify the input size on the first layer.
model = Sequential()
model.add(LSTM(64, input_dim=64, input_length=10, return_sequences=True))  # returns a (None, 10, 64) 3D tensor
model.add(LSTM(32, return_sequences=True))  # returns a (None, 10, 32) tensor
model.add(LSTM(10))  # returns a (None, 10) tensor
```

If we need to stack RNNs, we must set return_sequences=True on every recurrent layer that feeds into another recurrent layer. Afterwards we can add fully connected layers and a softmax to train the stacked RNN.
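The old Keras 1 `input_dim`/`input_length` arguments above map to `input_shape=(timesteps, dims)` in current tf.keras. A runnable version of the stacked RNN, with the fully connected softmax head the text suggests (the 5-class output size is a hypothetical choice), might look like:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.LSTM(64, return_sequences=True, input_shape=(10, 64)),  # -> (None, 10, 64)
    layers.LSTM(32, return_sequences=True),                        # -> (None, 10, 32)
    layers.LSTM(10),                                               # -> (None, 10)
    layers.Dense(5, activation="softmax"),  # hypothetical 5-class classification head
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
print(model.output_shape)  # (None, 5)
```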

ConvLSTM2D is an LSTM network, but its input transformations and recurrent transformations are implemented via convolutions.

Stateful means the final states computed on one batch are used as the initial states for the next batch, instead of being reset to zero between batches.
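A minimal stateful sketch, assuming a fixed batch size of 4 (stateful=True requires the full batch_input_shape, since state is stored per batch position):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.LSTM(8, stateful=True, batch_input_shape=(4, 10, 16)),
])

x = np.random.rand(4, 10, 16).astype("float32")
first = model(x)   # starts from zero states
second = model(x)  # starts from the states left behind by the first call
model.reset_states()  # clear the carried states manually when a sequence ends
print(first.shape)  # (4, 8)
```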

Conv3D performs sliding-window convolution over a 3D input. When using this layer as the first layer, you should provide the input_shape argument. For example, input_shape = (3, 10, 128, 128) means convolving over 10 frames of 128x128 color RGB images (channels first). The position of the channel axis is specified by the data_format parameter.
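A runnable Conv3D sketch. Note that with the default data_format="channels_last" the per-sample shape is (frames, rows, cols, channels), unlike the channels-first shape in the text; the small 32x32 frames and 4 filters here are arbitrary assumptions to keep the example light:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    # slide a 3x3x3 window over (time, height, width)
    layers.Conv3D(4, kernel_size=(3, 3, 3), padding="same",
                  input_shape=(10, 32, 32, 3)),
])

x = np.random.rand(1, 10, 32, 32, 3).astype("float32")  # 10 frames of 32x32 RGB
print(model(x).shape)  # (1, 10, 32, 32, 4)
```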