This project involves generating summaries of AMI meeting transcripts. Summarization can be done in two ways: extractive summarization and abstractive summarization. This work provides an analysis of different methods proposed for abstractive summarization using SOTA language models, and also describes how to tackle summarization of longer documents, as in the case of the AMI meeting corpus. Adapting transformers to work efficiently (without high computation cost) on longer documents is a very popular research area, since one of the core problems for long-text summarization is the models' dependency on sequence length: the full attention mechanism scales quadratically with the number of tokens.
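One common workaround for the length limit is to split a long transcript into overlapping chunks, summarize each chunk, and then combine the partial summaries. A minimal sketch of the chunking step (the `chunk_size` and `overlap` values are illustrative, not the project's actual settings):

```python
# Split a long transcript into overlapping word-level chunks so each
# piece fits within a transformer's input limit. Overlap preserves some
# context across chunk boundaries.
def chunk_transcript(text, chunk_size=512, overlap=64):
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks

# Example: a 1200-word transcript yields 3 overlapping chunks
transcript = " ".join(f"word{i}" for i in range(1200))
print(len(chunk_transcript(transcript)))  # 3
```

Each chunk can then be passed independently to a summarization model, which keeps per-call attention cost bounded regardless of transcript length.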
This project is anomaly detection on the closing prices of the S&P 500 (stock market index) time series using an LSTM autoencoder. Since LSTM networks are well suited to time-series data, I trained an LSTM autoencoder using the Keras API with TensorFlow 2 as the backend to detect anomalies (sudden price changes) in the S&P 500 index.
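The detection logic can be sketched without the model itself: the series is sliced into fixed-length windows for the LSTM, and after training, a window is flagged as anomalous when its reconstruction error exceeds a threshold. This is a minimal sketch assuming per-window reconstruction errors from the autoencoder are already computed; the window size and `k` are illustrative:

```python
import numpy as np

def make_sequences(series, window=30):
    """Slice a 1-D price series into overlapping windows for the LSTM."""
    return np.array([series[i:i + window]
                     for i in range(len(series) - window + 1)])

def flag_anomalies(errors, k=3.0):
    """Flag windows whose reconstruction error exceeds mean + k * std."""
    errors = np.asarray(errors, dtype=float)
    threshold = errors.mean() + k * errors.std()
    return errors > threshold
```

With this rule, normal reconstruction noise stays below the threshold while sudden price changes, which the autoencoder reconstructs poorly, stand out.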
I have used two networks here. The first is a Generator, which takes random noise for inspiration and tries to generate a face sample. The second is a Discriminator, which takes a face sample and tries to tell whether it is real or fake, i.e. it predicts the probability of the input image being a real face. A snippet of faces generated by the trained model after 15k iterations is attached.
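The adversarial setup hinges on how the labels are assigned in the alternating training loop. Below is a structural sketch with the real Keras models replaced by stubs; the latent dimension, batch size, and image shape are illustrative assumptions, not the project's exact values:

```python
import numpy as np

latent_dim, batch = 100, 16  # illustrative sizes

def generator(noise):
    # stub: the real model maps noise vectors -> face images
    return np.zeros((len(noise), 64, 64, 3))

def discriminator(images):
    # stub: the real model outputs P(image is a real face)
    return np.full(len(images), 0.5)

for step in range(2):  # the actual project trains for 15k iterations
    noise = np.random.normal(size=(batch, latent_dim))
    fake = generator(noise)
    # Discriminator step: real faces labelled 1, generated faces 0
    d_labels = np.concatenate([np.ones(batch), np.zeros(batch)])
    # Generator step: labels set to 1 so the generator is rewarded
    # when the discriminator mistakes its samples for real faces
    g_labels = np.ones(batch)
```

The key design point is the flipped labels in the generator step: the generator's loss improves exactly when the discriminator's prediction on fake samples moves toward "real".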
The dataset is from Kaggle and contains fake and real news articles. Preprocessing is done on the text, including stripping HTML tags, removing stopwords, and padding sequences. GloVe (100d) vector representations of words are used to capture better relations between words. The model architecture comprises two stacked LSTM layers. It achieved 99.87% accuracy on the training set and 99.92% on the test set.
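The preprocessing steps can be sketched as below; the stopword list, sequence length, and pad token are illustrative stand-ins (the project presumably uses a full stopword list and Keras `pad_sequences` on token ids):

```python
import re

# Tiny illustrative stopword set, not the full list used in the project
STOPWORDS = {"the", "a", "an", "is", "are", "of", "and", "to", "in"}

def preprocess(text, maxlen=10, pad_token="<pad>"):
    text = re.sub(r"<[^>]+>", " ", text)           # strip HTML tags
    tokens = [w for w in re.findall(r"[a-z']+", text.lower())
              if w not in STOPWORDS]               # remove stopwords
    tokens = tokens[:maxlen]
    return tokens + [pad_token] * (maxlen - len(tokens))  # pad to maxlen

print(preprocess("<p>The markets are <b>volatile</b> today</p>"))
# ['markets', 'volatile', 'today', '<pad>', ..., '<pad>']
```

After this step, each cleaned token is mapped to its 100-dimensional GloVe vector before being fed to the stacked LSTM layers.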
An image-to-caption model which can predict captions for images. The model architecture is a CNN encoder and an RNN decoder. A pretrained InceptionV3 is used as the CNN encoder, and its last hidden layer is used as the image embedding. This embedding is used as the initial state of the RNN.
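A toy NumPy illustration of the decoder wiring: a vector standing in for InceptionV3's last hidden layer seeds the RNN's hidden state, and the RNN then unrolls to emit caption tokens greedily. All weights, sizes, and the start-token id here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
embed_dim, vocab = 8, 5                          # illustrative sizes
Wx = rng.normal(size=(embed_dim, embed_dim)) * 0.1  # input weights
Wh = rng.normal(size=(embed_dim, embed_dim)) * 0.1  # recurrent weights
Wo = rng.normal(size=(embed_dim, vocab)) * 0.1      # output projection
E = rng.normal(size=(vocab, embed_dim)) * 0.1       # token embeddings

h = rng.normal(size=embed_dim)  # image embedding -> initial RNN state
token = 0                       # assumed <start> token id
caption = []
for _ in range(4):              # greedy decoding, fixed length
    h = np.tanh(E[token] @ Wx + h @ Wh)   # simple RNN cell
    token = int(np.argmax(h @ Wo))        # pick most likely next token
    caption.append(token)
print(caption)  # a short sequence of predicted token ids
```

The point of the sketch is the first line of the loop: because `h` starts as the image embedding rather than zeros, every generated token is conditioned on the image content.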