1) The document summarizes recent papers on deep learning techniques for text summarization, including neural attention models for sentence summarization and abstractive text summarization with sequence-to-sequence RNNs.
2) It also covers papers on attention-based transformer models, sparsely-gated mixture-of-experts layers in neural networks, and effective approaches to attention-based neural machine translation.
3) Together, these advances in deep learning (attention mechanisms, transformer architectures, and mixture-of-experts layers) have driven substantial improvements on natural language tasks such as text summarization and machine translation.
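
The attention mechanism common to these papers can be illustrated with a minimal sketch of scaled dot-product attention, the core operation in transformer models. The function name, shapes, and random inputs below are illustrative assumptions, not taken from any of the summarized papers.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K: (seq_len, d_k); V: (seq_len, d_v).
    Returns a weighted sum of V rows for each query."""
    d_k = Q.shape[-1]
    # Pairwise query-key similarities, scaled to keep softmax stable.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the key dimension (numerically stabilized).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is an attention-weighted average of the values.
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))  # 4 query positions, dimension 8
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

In practice this operation is applied per head in multi-head attention, with learned projections producing Q, K, and V from the input sequence.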