Gyan is AVP and Engineering Head at Times Network. He is a technology leader and AI-ML expert with 25+ years of experience in hands-on architecture, design, and development of a broad range of software technologies. Gyan likes writing blogs on technology, especially Artificial Intelligence, Machine Learning, NLP, and Data Science. In his blogs, Gyan tries to explain complex concepts in simple language, mostly supported by very simple example code (with the corresponding GitHub link) that is easy to understand. Gyan holds a B.Tech. from IIT Kanpur, an M.Tech. from IIT Madras, and completed the AI Professional Program at Stanford University (2019-2021).

How Ashish Vaswani’s Attention Mechanism Transformed Generative AI – A Gift from India to the World

Evaluating embeddings is central to modern machine learning, because embeddings define how models internally represent meaning—whether it’s a sentence, an image patch, an audio segment, or a video frame. The structure, separability, and contextual richness of these embeddings determine a model’s overall intelligence. Earlier architectures like RNNs and LSTMs struggled… Continue reading

How AI Is Transforming Software Development – And What It Means for Developers, Startups & Big Tech

The last two years have reshaped the software industry at a pace we’ve never seen before. With the rise of AI-assisted development, the entire lifecycle of building digital products—from writing code to designing UI, building full apps, testing, deployment, and even documentation—is becoming dramatically faster and more efficient. What once… Continue reading

NLP – SVD-based Word Embedding

Word embeddings build on the intuition that a word’s meaning can be represented by the words that frequently appear near it. In SVD-based embedding methods, we first create a word co-occurrence matrix and then reduce its dimensionality using SVD to obtain the embeddings. Continue reading
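As a quick illustration of the two steps above, here is a minimal sketch in plain NumPy. The toy corpus, window size, and embedding dimension are illustrative assumptions, not taken from the full post.

```python
# Minimal sketch of SVD-based word embeddings (toy corpus and parameters are assumptions).
import numpy as np

corpus = ["i like deep learning", "i like nlp", "i enjoy flying"]
window = 1  # co-occurrence window size (assumption)

# Build the vocabulary
tokens = [sentence.split() for sentence in corpus]
vocab = sorted({w for sentence in tokens for w in sentence})
idx = {w: i for i, w in enumerate(vocab)}

# Step 1: build the word co-occurrence matrix
co = np.zeros((len(vocab), len(vocab)))
for sentence in tokens:
    for i, w in enumerate(sentence):
        for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
            if j != i:
                co[idx[w], idx[sentence[j]]] += 1

# Step 2: reduce dimensionality with SVD to get dense embeddings
U, S, Vt = np.linalg.svd(co)
k = 2  # embedding dimension (assumption)
embeddings = U[:, :k] * S[:k]

for w in vocab:
    print(w, embeddings[idx[w]])
```

Words that occur in similar neighborhoods (for example "like" and "enjoy") end up with nearby vectors after the SVD step.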

NLP – TF-IDF (Term Frequency-Inverse Document Frequency) model

To overcome the limitation of common words in the Bag of Words (BOW) methodology, a technique called TF-IDF (Term Frequency-Inverse Document Frequency) was created. It raises the importance of a term in a document when that term occurs multiple times in the document, and it penalizes the importance of a term that occurs very frequently across many documents in the corpus. Continue reading
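The sketch below shows one common way to compute these scores from scratch. The toy documents are made up for illustration, and it uses the standard log-based IDF variant, which may differ from the exact formula used in the full post.

```python
# Minimal from-scratch TF-IDF sketch (toy documents; log-based IDF variant is an assumption).
import math
from collections import Counter

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are friends",
]
tokenized = [d.split() for d in docs]
N = len(tokenized)

# Document frequency: in how many documents does each term appear?
df = Counter()
for doc in tokenized:
    for term in set(doc):
        df[term] += 1

def tf_idf(term, doc):
    # Term frequency: relative count of the term within the document
    tf = doc.count(term) / len(doc)
    # Inverse document frequency: penalizes terms that appear in many documents
    idf = math.log(N / df[term])
    return tf * idf

for doc in tokenized:
    scores = {t: round(tf_idf(t, doc), 3) for t in set(doc)}
    print(scores)
```

Notice that a very common word like "the" gets an IDF of zero here because it appears in most documents, while rarer, more informative terms keep higher scores.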