Code Logic

Code Logic is all about developing programming logic, and it aims to improve your problem-solving skills. By listening to this podcast you will learn how to think through a difficult problem and decompose it into multiple simpler ones...

Collocations, Part Two (S3E2)

Hey guys, this is Sarvesh again. I hope you enjoyed the recent episodes, and as promised, this is another episode on collocations. These techniques are the most commonly used ones for finding out whether a pair of words is a collocation or not. The general principle behind them is hypothesis testing: our null hypothesis is the assumption that the words are not a collocation. To find out more, go ahead and listen to the episode. #NLP #NaturalLanguageProcessing #Learn #Something #New
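
As a rough sketch of the idea (not the exact method from the episode), here is a t-test score for a bigram in plain Python; the function name `t_score` and the toy corpus are my own for illustration. Under the null hypothesis the two words are independent, so the expected bigram probability is the product of the unigram probabilities; a large t suggests we can reject that hypothesis and call the pair a collocation.

```python
import math
from collections import Counter

def t_score(tokens, w1, w2):
    """t-test score for the bigram (w1, w2). The null hypothesis is that
    the two words occur independently of each other."""
    n = len(tokens)
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    x_bar = bigrams[(w1, w2)] / n                   # observed bigram probability
    mu = (unigrams[w1] / n) * (unigrams[w2] / n)    # expected if independent
    # For rare events the sample variance is approximately x_bar / n
    return (x_bar - mu) / math.sqrt(x_bar / n)

tokens = ("new york is not the same as a new book about york " * 50).split()
print(t_score(tokens, "new", "york"))   # well above the 2.576 critical value
```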

01-20
07:28

Collocations, Part One (S3E1)

Hey guys, sorry for the long break. I was working from home and it's difficult to record audio at home, but anyway, the podcast is back. I think we will discuss word embeddings and other embeddings later, because we have a lot to learn before that. This episode focuses on collocations: what a collocation is made of, and two methods for finding them, the first based on frequency and the second based on mean and variance. We are starting afresh with season 3 because I think we were learning in an unstructured manner. I always strive to make good content for my listeners and will try to make it as decent as possible; I might reupload previous episodes. I hope you learn something from this episode and enjoy it. See you in the next episode. Good luck, have fun and happy new year! #collocations #learn #learnsomethingnew #NLP
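
A minimal sketch of the frequency-based method (the helper name `top_bigrams` is mine, not from the episode): just count adjacent word pairs and take the most frequent. Raw frequency alone favours pairs of very common words, which is exactly the shortcoming the mean-and-variance and hypothesis-testing methods address.

```python
from collections import Counter

def top_bigrams(text, k=3):
    """Frequency-based collocation candidates: count adjacent word pairs."""
    tokens = text.lower().split()
    return Counter(zip(tokens, tokens[1:])).most_common(k)

text = "strong tea and strong tea with very strong tea"
print(top_bigrams(text, 1))
```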

01-03
13:46

Word Embeddings - A simple introduction to word2vec

Hey guys, welcome to another episode on word embeddings! In this episode we talk about another popular word embedding technique known as word2vec. We use word2vec to capture contextual meaning in our vector representation. I've found a useful reading on word2vec; do read it for an in-depth explanation. P.S. Sorry for always posting episodes after a significant delay; I myself am learning various things, and I have different blogs to handle and multiple projects in place, so my schedule is packed almost daily. I hope you all get some value from my podcasts and that they help you build an intuitive understanding of various topics. See you in the next podcast episode!
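
To make the "context" idea concrete, here is a sketch (my own illustration, not code from the episode) of how word2vec's skip-gram variant turns a sentence into training pairs: each word is asked to predict its neighbours within a window, and that is where the contextual meaning comes from.

```python
def skipgram_pairs(tokens, window=2):
    """Generate the (center, context) pairs that skip-gram trains on:
    each word predicts its neighbours within the window."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

print(skipgram_pairs(["the", "cat", "sat"], window=1))
```

In practice you would feed pairs like these to a small neural network (or just use Gensim's `Word2Vec` class, which does all of this for you).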

01-13
04:02

Learn about the TF-IDF Model in Natural Language Processing

In this podcast episode we talk about the TF-IDF model in Natural Language Processing. TF-IDF stands for term frequency-inverse document frequency. We use the TF-IDF model to give more weight to important words compared with common words like the, a, in, there, where, etc. To learn Python programming visit www.stacklearn.org. See you in the next podcast episode!
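
As a toy sketch of the weighting (my own minimal version, using tf = raw count and idf = log(N / df)): a word like "the" that appears in every document gets idf = log(1) = 0, so its weight vanishes, while rarer, more informative words keep theirs.

```python
import math

def tfidf(docs):
    """Toy TF-IDF over tokenized documents: tf = count in the document,
    idf = log(N / df). Words in every document get weight 0."""
    n = len(docs)
    vocab = {w for d in docs for w in d}
    df = {w: sum(w in d for d in docs) for w in vocab}
    return [{w: d.count(w) * math.log(n / df[w]) for w in set(d)} for d in docs]

weights = tfidf([["the", "cat", "sat"], ["the", "dog", "ran"]])
print(weights[0]["the"])   # 0.0 -- "the" occurs in both documents
```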

12-13
01:45

Bag of Words in Natural Language Processing

In this podcast episode we talk about the bag of words model in natural language processing. The Bag of Words model is simply a feature extraction method used in NLP. We mainly discuss what the bag of words model is and why it is required. In summary, BOW is simply a collection of (word, frequency) pairs. To learn more about BOW: visit this. Gensim introduction: visit this. Also, to support me, do visit www.stacklearn.org
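
A one-liner sketch of the idea (mine, not from the episode): a bag of words keeps the (word, frequency) pairs and deliberately throws away word order.

```python
from collections import Counter

def bag_of_words(text):
    """BOW: map each word to its frequency, discarding word order."""
    return dict(Counter(text.lower().split()))

print(bag_of_words("the cat sat on the mat"))
# {'the': 2, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1}
```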

10-09
04:05

Review of Preprocessing steps in NLP and More!

In this episode we review preprocessing steps such as lowercasing text, removing unwanted characters and other related cleaning tasks which we have discussed in the previous episodes... We also talk about the Gensim package in Python and how it simplifies preprocessing in Natural Language Processing.
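
As a rough pure-Python imitation of what Gensim's `simple_preprocess` does (this is my sketch, not Gensim's actual implementation): lowercase the text, keep only alphabetic tokens, and drop very short and very long ones.

```python
import re

def preprocess(text, min_len=2, max_len=15):
    """Lowercase, keep alphabetic tokens, drop tokens outside the
    length bounds -- roughly what gensim.utils.simple_preprocess does."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if min_len <= len(t) <= max_len]

print(preprocess("Hello!! NLP in 2021 is FUN :-)"))
# ['hello', 'nlp', 'in', 'is', 'fun']
```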

09-24
03:02

Lemmatization in Natural Language Processing

In this podcast episode we talk about lemmatization in natural language processing. It is a text normalization step which we perform to normalize words. Lemmatization improves on the shortcomings of stemming, and in this episode we talk about that shortcoming and also how to lemmatize using the nltk library. Learn Python: www.stacklearn.org. Python package to save snippets: PyPI - codesnip
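
A toy sketch of the difference (my own, far simpler than nltk's `WordNetLemmatizer`, which uses WordNet properly): a lemmatizer knows irregular forms like "better" → "good", which no suffix-chopping stemmer can recover.

```python
def lemmatize(word):
    """Toy lemmatizer: a lookup table for irregular forms plus a couple
    of suffix rules. Real lemmatizers consult a dictionary like WordNet."""
    irregular = {"better": "good", "ran": "run", "mice": "mouse"}
    if word in irregular:
        return irregular[word]
    if word.endswith("ies"):
        return word[:-3] + "y"
    if word.endswith("s") and not word.endswith("ss"):
        return word[:-1]
    return word

print(lemmatize("better"))   # 'good' -- something stemming cannot do
```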

09-23
01:15

Stemming in Natural Language Processing

In this podcast episode we talk about stemming in natural language processing. It is a text normalization step which we perform so that words such as run, runs and running count as the same word... Stemming involves chopping off affixes such as ing, ly, etc.
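
A crude sketch of suffix chopping (mine, much simpler than real stemmers like nltk's `PorterStemmer`, which apply ordered rule sets): strip a known suffix, then undo a doubled consonant so "running" ends up as "run" rather than "runn".

```python
def stem(word):
    """Crude suffix-stripping stemmer: run, runs, running -> run."""
    for suffix in ("ing", "ed", "ly", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            word = word[: -len(suffix)]
            if len(word) > 2 and word[-1] == word[-2]:   # runn -> run
                word = word[:-1]
            return word
    return word

print(stem("running"), stem("runs"), stem("quickly"))
```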

09-17
01:24

Tokenization in Natural Language Processing

In this episode we discuss tokenization in Natural Language Processing. As discussed in the previous episode, tokenization is an important step in data cleaning, and it entails dividing a large piece of text into smaller chunks. In this episode we discuss some of the basic tokenizers available from nltk.tokenize in nltk. If you liked this episode, do follow, and do connect with me on Twitter @sarvesh0829 and follow my blog at www.stacklearn.org. If you sell something locally, do it using the BagUp app available on the Play Store; it would help a lot.
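
A minimal regex tokenizer as a sketch (mine; similar in spirit to nltk's `word_tokenize` but far less thorough): words keep their internal apostrophes, and punctuation becomes separate tokens.

```python
import re

def tokenize(text):
    """Split text into word tokens (keeping internal apostrophes)
    and separate punctuation tokens."""
    return re.findall(r"\w+(?:'\w+)?|[^\w\s]", text)

print(tokenize("Don't split me, please!"))
# ["Don't", 'split', 'me', ',', 'please', '!']
```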

09-14
02:12

Data Cleaning in Natural Language Processing

In this episode we talk about the various steps of the data cleaning process in Natural Language Processing. Data cleaning is almost a given whenever you want to perform natural language processing on a given text. Data cleaning in natural language processing involves tokenization, lowercasing words, lemmatization, and so on. Aside from that, we also talk briefly about how you can implement those steps. To install the codesnip package mentioned in the last part, open your terminal and run pip install codesnip
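
The steps above can be chained into one tiny pipeline; this is my own minimal sketch (the stopword list is illustrative, not a standard one), with lemmatization left as the obvious next stage.

```python
import re

STOPWORDS = {"the", "a", "in", "is", "to", "and"}

def clean(text):
    """Minimal cleaning pipeline: tokenize, lowercase, drop stopwords."""
    tokens = re.findall(r"[a-zA-Z]+", text)
    tokens = [t.lower() for t in tokens]
    return [t for t in tokens if t not in STOPWORDS]

print(clean("The Cat is in the Hat!"))   # ['cat', 'hat']
```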

09-13
02:32

Natural language processing

A general discussion about natural language processing. In this episode we discuss what natural language processing contains, and we discuss the topics which we will be covering in further episodes.

10-04
02:01

Introduction to word embeddings and One hot encoding in NLP

In this podcast episode we discuss why word embeddings are required and what they are, and we also discuss one-hot encodings. In the next episode we will talk about specific word embedding techniques individually. Stay tuned. Sponsored by www.stacklearn.org
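
A quick sketch of one-hot encoding (my own illustration): each word becomes a vector as long as the vocabulary, with a single 1 at that word's index. Notice the vectors carry no notion of similarity, which is exactly why denser embeddings are needed.

```python
def one_hot(vocab, word):
    """One-hot encode a word against a fixed vocabulary."""
    index = {w: i for i, w in enumerate(sorted(vocab))}
    vec = [0] * len(vocab)
    vec[index[word]] = 1
    return vec

print(one_hot({"cat", "dog", "fish"}, "dog"))   # [0, 1, 0]
```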

12-22
02:37

Searching Basics

In this podcast episode we talk about the basics of searching in computer programming. We discuss two approaches: the first is the naive way, and the other is binary search. These two techniques are apt for many applications and will help you find your way around algorithm design.
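
Both approaches sketched side by side (standard textbook versions, not transcribed from the episode): the naive way checks every element in O(n), while binary search halves a sorted range each step in O(log n).

```python
def linear_search(items, target):
    """The naive way: check every element in turn."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

def binary_search(items, target):
    """Binary search on a sorted list: halve the search range each step."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = [1, 3, 5, 7, 9]
print(linear_search(data, 7), binary_search(data, 7))   # 3 3
```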

07-09
03:31

Let's dive into linked lists this time!

Hello everyone, this time we will learn about linked lists! I mean, they are amazing and very useful. They are used for various purposes, like creating stacks, queues and lists, and in various management systems: library management systems, admissions management systems, hospital management systems, and the list goes on! In this podcast episode we look into what linked lists are and the various types of linked lists.
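
A minimal singly linked list as a sketch (my own; the episode also covers other variants like doubly and circular lists): each node holds a value and a pointer to the next node.

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    """Singly linked list: the head points at the first node,
    and each node points at the next."""
    def __init__(self):
        self.head = None

    def push_front(self, value):
        node = Node(value)
        node.next = self.head
        self.head = node

    def to_list(self):
        out, node = [], self.head
        while node:
            out.append(node.value)
            node = node.next
        return out
```

For example, pushing 1, 2, 3 to the front leaves the list reading 3, 2, 1.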

05-21
06:23

Let's look into Stacks

Stacks are a linear data structure which we use in many applications and in various domains. In this podcast episode we look at the basics of stacks and where they are used.
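
A minimal stack sketch (mine, wrapping a Python list): last in, first out, with push and pop both happening at the top.

```python
class Stack:
    """LIFO stack: push and pop both operate on the top."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def peek(self):
        return self._items[-1]
```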

05-12
02:48

Let's look into Minimum Spanning Trees (part 2)

In this episode we discuss how we can implement a minimum spanning tree algorithm. While doing so, we also discuss the union-find data structure and how it helps us find whether including an edge will lead to a cycle in the tree or not.
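
A compact sketch of the combination (this is Kruskal's algorithm with union-find; the example graph is my own): take edges cheapest-first, and use union-find to skip any edge whose endpoints already share a root, since adding it would create a cycle.

```python
def find(parent, x):
    """Union-find: follow parents to the root, compressing the path."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def kruskal(n, edges):
    """Kruskal's MST: edges are (weight, u, v); union-find detects cycles."""
    parent = list(range(n))
    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:                 # this edge creates no cycle
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3)]
print(sum(w for w, _, _ in kruskal(4, edges)))   # total MST weight: 6
```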

03-20
04:42

Let's look into Minimum Spanning Trees (part 1)

So, in this podcast we discuss what connected graphs are, what trees are, how they differ from each other, and how to convert a graph into a tree. Basically, after getting a grasp of these basic concepts, we will try to understand how we can get a minimum spanning tree from a given connected graph.

03-04
02:47

Let's learn a bit about queues

Queues are linear data structures which are quite important in operating systems, where they are used to implement algorithms such as round-robin scheduling, priority queue scheduling and multilevel queue scheduling... In this podcast we will have a glance at these concepts and discuss a bit about how a queue works and how we can implement it.
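
A minimal queue sketch (mine, built on `collections.deque` for O(1) operations at both ends): first in, first out, just like a round-robin scheduler cycling processes through the front of the line.

```python
from collections import deque

class Queue:
    """FIFO queue: enqueue at the rear, dequeue from the front."""
    def __init__(self):
        self._items = deque()

    def enqueue(self, item):
        self._items.append(item)

    def dequeue(self):
        return self._items.popleft()
```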

02-18
03:24

Palindromic number and finding reverse of a number

In this podcast we will mainly discuss finding the reverse of a given number, using which we will find out whether the given number is a palindromic number or not...
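
The usual trick, sketched here in Python (function names are mine): peel off the last digit with `% 10`, append it to the running reverse, and a number is palindromic exactly when it equals its own reverse.

```python
def reverse_number(n):
    """Reverse a non-negative integer digit by digit."""
    rev = 0
    while n > 0:
        rev = rev * 10 + n % 10   # append the last digit of n
        n //= 10                  # drop the last digit of n
    return rev

def is_palindromic(n):
    return n == reverse_number(n)

print(reverse_number(123), is_palindromic(121))   # 321 True
```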

01-23
03:55

Factorials and code

In this episode we discuss factorials and how to write code that computes the factorial of a given number n: in an iterative manner, which is method 1, and in a recursive manner, which is method 2.
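
Both methods, sketched in Python (standard textbook versions, not transcribed from the episode):

```python
def factorial_iterative(n):
    """Method 1: multiply 1..n in a loop."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_recursive(n):
    """Method 2: n! = n * (n-1)!, with 0! = 1 as the base case."""
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

print(factorial_iterative(5), factorial_recursive(5))   # 120 120
```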

01-16
02:24
