Summarizing text from news articles to generate meaningful headlines
Jun 14 · 15 min read
During our school days, most of us would have encountered the reading comprehension section of our English paper. We would be given a paragraph or Essay based on which we need to answer several questions.
How do we as humans approach this task? We go through the entire text, make sense of the context in which the question is asked, and then write our answers. Is there a way we can use AI and deep learning techniques to mimic this human behavior?
Automatic text summarization is a common problem in machine learning and natural language processing (NLP). There are two approaches to this problem.
1. Extractive Summarization - Extractive text summarization works by picking out the most important sentences from the original text and combining them to form the final summary. We do a kind of extractive summarization ourselves when solving simple reading comprehension exercises. TextRank is a very popular unsupervised extractive summarization technique.
2. Abstractive Summarization - Abstractive text summarization, on the other hand, generates the summary as novel sentences, rephrasing the source or using new words instead of simply extracting the important sentences. For example, some reading comprehension questions are not straightforward; in such cases we rephrase or use new words to answer them.
We humans can easily do both kinds of text summarization. In this blog let us see how to implement abstractive text summarization using deep learning techniques.
Problem Statement
Given a news article text, we are going to summarize it and generate appropriate headlines.
Whenever a media account shares a news story on Twitter or any other social networking site, it provides a crisp headline to make users click the link and read the article.
Often media houses provide sensational headlines that serve as clickbait, a technique employed to increase clicks to their site.
Our problem statement is to generate a headline given the article text. For this we use the news_summary dataset. You can download the dataset here
Before we go through the code, let us learn some concepts needed for building an abstractive text summarizer.
Sequence to Sequence Model
Techniques like the multi-layer perceptron (MLP) work well when your input data is a vector, and convolutional neural networks (CNNs) work very well when your input data is an image.
What if the input x is a sequence? What if x is a sequence of words? In most languages the order of words matters a lot, so we need to somehow preserve the sequence of words.
The core idea here is that if the output depends on a sequence of inputs, then we need to build a type of neural network that gives importance to sequence information, one that retains and leverages the order of its inputs.
We can build a Seq2Seq model on any problem which involves sequential information. In our case, our objective is to build a text summarizer where the input is a long sequence of words(in a text body), and the output is a summary (which is a sequence as well). So, we can model this as a Many-to-Many Seq2Seq problem.
A many-to-many Seq2Seq model has two building blocks: an Encoder and a Decoder. The Encoder-Decoder architecture is mainly used to solve sequence-to-sequence (Seq2Seq) problems where the input and output sequences are of different lengths.
Generally, variants of Recurrent Neural Networks (RNNs), i.e. the Gated Recurrent Unit (GRU) or Long Short-Term Memory (LSTM), are preferred as the encoder and decoder components. This is because they are capable of capturing long-term dependencies by overcoming the problem of vanishing gradients.
Encoder-Decoder Architecture
Let us see a high-level overview of Encoder-Decoder architecture and then see its detailed working in the training and inference phase.
Intuitively this is what happens in our encoder-decoder network:
1. We feed in our input (in our case text from news articles) to the Encoder unit. Encoder reads the input sequence and summarizes the information in something called the internal state vectors (in case of LSTM these are called the hidden state and cell state vectors).
2. The encoder generates something called the context vector, which is passed to the decoder unit as input. The per-timestep outputs of the encoder are discarded; only the context vector is passed over to the decoder.
3. The decoder unit generates an output sequence based on the context vector.
We can set up the Encoder-Decoder in 2 phases:
- Training phase
- Inference phase
Training phase
A. Encoder
In the training phase at every time step, we feed in words from a sentence one by one in sequence to the encoder. For example, if there is a sentence “I am a good boy”, then at time step t=1, the word I is fed, then at time step t=2, the word am is fed, and so on.
Say, for example, we have a sequence x comprising the words x1, x2, x3, x4; then the encoder in the training phase processes them one time step at a time.
The initial state of the LSTM unit is a zero vector, or it is randomly initialized. Now h1, c1 is the state of the LSTM unit at time step t=1, when the word x1 of the sequence x is fed as input.
Similarly, h2, c2 is the state of the LSTM unit at time step t=2, when the word x2 of the sequence x is fed as input, and so on.
The hidden state (hi) and cell state (ci) of the last time step are used to initialize the decoder.
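The per-timestep state updates described above can be sketched with a minimal NumPy LSTM cell. This is an illustrative sketch only: the weight matrices are random stand-ins for trained parameters, and the dimensions are arbitrary choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # Compute all four gate pre-activations in one shot.
    z = W @ x_t + U @ h_prev + b           # shape: (4 * hidden,)
    hid = h_prev.shape[0]
    i = sigmoid(z[0:hid])                  # input gate
    f = sigmoid(z[hid:2 * hid])            # forget gate
    o = sigmoid(z[2 * hid:3 * hid])        # output gate
    g = np.tanh(z[3 * hid:4 * hid])        # candidate cell state
    c_t = f * c_prev + i * g               # new cell state
    h_t = o * np.tanh(c_t)                 # new hidden state
    return h_t, c_t

rng = np.random.default_rng(0)
embed, hidden, seq_len = 8, 16, 4          # a 4-word sequence x1..x4
W = rng.normal(size=(4 * hidden, embed))
U = rng.normal(size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)

h, c = np.zeros(hidden), np.zeros(hidden)  # zero-initialized initial state
for t in range(seq_len):                   # feed x1, x2, x3, x4 one by one
    x_t = rng.normal(size=embed)           # stand-in word embedding
    h, c = lstm_step(x_t, h, c, W, U, b)
# (h, c) after the last step are the states used to initialize the decoder
```

Note how the hidden and cell states from step t=1 are carried forward into step t=2, which is exactly how the sequence information is retained.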
B. Decoder
Now the initial states of the decoder are initialized to the final states of the encoder. This intuitively means that the decoder is trained to start generating the output sequence depending on the information encoded by the encoder.
The target sequence is unknown while decoding the test sequence. So we start predicting the target sequence by sending in a first word, which is always the <start> token, into the decoder; the <end> token signals the end of the sentence.
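In practice this means every target headline is wrapped with the two special tokens before training. A minimal sketch (the token names `sostok`/`eostok` and the example headlines are illustrative choices, not required by any library):

```python
# Wrap each target headline with start/end markers so the decoder
# knows where generation begins and ends.
headlines = [
    "markets rally on rate cut hopes",
    "new species of frog discovered",
]

START, END = "sostok", "eostok"
targets = [f"{START} {h} {END}" for h in headlines]

# During training, the decoder input is the target without the end
# token (it starts with sostok), and the decoder output it is trained
# to predict is the target without the start token (it ends with eostok).
decoder_input = [t.rsplit(" ", 1)[0] for t in targets]   # drop eostok
decoder_output = [t.split(" ", 1)[1] for t in targets]   # drop sostok
```

This one-step offset between decoder input and output is what teaches the model to predict the next word given everything generated so far.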
Inference Phase
Now, in the inference phase, we want our decoder to predict the output sequence (in our case, headlines). After training, the model is tested on new source sequences for which the target sequence is unknown. So we need to set up an inference architecture to decode a test sequence.
At every time step, the LSTM unit in the decoder gives outputs y1, y2, y3, …, yk, where k is the length of the output sequence. At time step t=1 the output y1 is generated, at time step t=2 the output y2 is generated, and so on.
But in the testing stage, as mentioned earlier, we do not know what the length of our target sequence will be. How do we tackle this problem? In other words, how do we decode the test sequence? We follow the steps below:
1. Encode the entire input sequence and initialize the decoder with the internal states of the encoder
2. Pass the <start> token as an input to the decoder
3. Run the decoder for one time step with the internal states
4. The output will be a probability distribution over the next word; the word with the maximum probability is selected
5. Pass the sampled word as an input to the decoder in the next time step and update the internal states with those of the current time step
6. Repeat steps 3–5 until we generate the <end> token or hit the maximum length of the target sequence.
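The loop above can be sketched in plain Python. Here `predict_next` is a hypothetical stand-in for a single decoder step; in the real model it would be a Keras inference call returning a probability distribution over the vocabulary plus updated states.

```python
def predict_next(word, states):
    # Hypothetical stand-in that deterministically walks a toy vocabulary.
    # A real decoder would return softmax probabilities and new LSTM states.
    nxt = {"<start>": "markets", "markets": "rally", "rally": "<end>"}
    return nxt.get(word, "<end>"), states

def greedy_decode(initial_states, max_len=10):
    word, states = "<start>", initial_states       # feed <start> first
    output = []
    while len(output) < max_len:                   # cap on target length
        word, states = predict_next(word, states)  # one decoder step
        if word == "<end>":                        # stop token reached
            break
        output.append(word)
    return output

print(greedy_decode(initial_states=None))  # → ['markets', 'rally']
```

Picking the argmax word at each step, as here, is greedy decoding; beam search is a common refinement that keeps several candidate sequences alive instead of just one.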
Disadvantages of Encoder-Decoder Network
- In the encoder-decoder network, a context vector is generated by our encoder which gets passed to the decoder as input. Now if our input sequence is large ( in our case the text from news articles will be mostly large), one single context vector cannot capture the essence of the input sequence.
- It is difficult for the encoder to memorize long sequences into a fixed-length vector
- The Bilingual Evaluation Understudy score, or BLEU for short, is a metric for evaluating a generated sentence against a reference sentence. A perfect match results in a score of 1.0, whereas a complete mismatch results in a score of 0.0.
Researchers observed that the BLEU score deteriorates as the sentence length of the source and reference text increases. It does a reasonable job up to a sentence length of about 20; after that the score falls.
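As a rough illustration, modified unigram precision, the 1-gram building block of BLEU, can be computed in a few lines (full BLEU additionally combines higher-order n-grams and a brevity penalty):

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Clipped unigram precision: the 1-gram component of BLEU."""
    cand = Counter(candidate.split())
    ref = Counter(reference.split())
    # Each candidate word's count is clipped by its count in the reference.
    overlap = sum(min(n, ref[w]) for w, n in cand.items())
    return overlap / max(sum(cand.values()), 1)

reference = "markets rally on rate cut hopes"
print(unigram_precision("markets rally on rate cut hopes", reference))  # 1.0
print(unigram_precision("weather sunny today", reference))              # 0.0
```

The clipping step is what prevents a candidate from gaming the score by repeating one reference word over and over.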
For our task both the source and target sentence length are higher than 20, hence we need to overcome this shortcoming of the encoder-decoder network.
Concept of Attention
- When humans read a lengthy paragraph, they pay attention to certain words, then shift their attention to the next few words, and so on.
- Intuitively, think of your teacher correcting your 10-mark answer on your History paper. The teacher has a key in which certain important points/words are listed, so on your answer sheet the teacher looks for these important words from the key. More attention is paid to the keywords.
- Hence humans shift their attention from one sequence of words to another when given a lengthy input sequence.
- So, instead of looking at all the words in the source sequence, we can increase the importance of specific parts of the source sequence that result in the target sequence. This is the basic idea behind the attention mechanism.
- The attention mechanism typically makes use of bidirectional RNNs. A regular RNN is unidirectional, as the sequence is processed from the first word to the last word. In a bidirectional RNN we have connections in both the forward and the reverse direction.
- So, in addition to the forward connection, there is also a backward connection for each of the neurons.
- The outputs generated from the forward and the backward connections of the neurons are concatenated to give the outputs y1, y2, and so on. So we will have two backpropagations: one for the forward path and one for the backward path.
- The context vector is nothing but the weighted sum of outputs from the encoder.
There are two different classes of attention mechanism, depending on the way the attended context vector is derived:
- Global Attention-Here, the attention is placed on all the source positions. In other words, all the hidden states of the encoder are considered for deriving the attended context vector:
- Local Attention-Here, the attention is placed on only a few source positions. Only a few hidden states of the encoder are considered for deriving the attended context vector.
We will be using global attention for our task at hand.
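The weighted-sum context vector of global attention can be sketched in NumPy. The scores here come from a simple dot product between the decoder state and every encoder hidden state, which is one common scoring choice; the dimensions and random values are illustrative only.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())   # subtract the max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
seq_len, hidden = 5, 8
encoder_states = rng.normal(size=(seq_len, hidden))  # h1..h5 from the encoder
decoder_state = rng.normal(size=hidden)              # current decoder state

# Global attention: score every encoder position, then normalize.
scores = encoder_states @ decoder_state              # shape: (seq_len,)
weights = softmax(scores)                            # attention weights, sum to 1
context = weights @ encoder_states                   # weighted sum, shape: (hidden,)
```

Because the weights are recomputed at every decoding step, the context vector changes as the decoder moves through the output, which is exactly what a single fixed encoder vector cannot do.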
Code Walkthrough
Now that we have learned all the concepts, let's dive into the code. First, let us import all the necessary libraries.
Custom Attention Layer
Keras does not officially support an attention layer. So we can either implement our own attention layer or use a third-party implementation; we will go with the latter option for this blog. You can download the attention layer from here, copy it into a separate file called attention.py, and then import it.
Now let us read our dataset. Due to computational constraints, we will load just 20,000 rows of it.
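A minimal loading sketch follows. The filename `news_summary.csv` and the column names are assumptions; adjust them to match the downloaded files. For the real dataset you would call `pd.read_csv("news_summary.csv", nrows=20000)` directly; here a tiny in-memory CSV stands in for the file.

```python
import io
import pandas as pd

# Hypothetical stand-in for the downloaded news_summary file.
csv_data = io.StringIO(
    "headlines,text\n"
    "markets rally,Stocks rose sharply after the rate decision.\n"
    "frog discovered,Scientists found a new species of frog.\n"
)

# nrows caps how many rows are read, keeping memory use bounded.
df = pd.read_csv(csv_data, nrows=20000)
df = df.dropna().reset_index(drop=True)  # drop incomplete rows
print(df.shape)
```

Reading with `nrows` avoids loading the whole file into memory, which is exactly the computational constraint mentioned above.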