Neural Network – Human Brain

ARTIFICIAL INTELLIGENCE

PART II

OpenAI has introduced breakthrough products such as GPT-3, DALL-E, and ChatGPT, each pushing the boundaries of what AI can achieve. These achievements have demonstrated OpenAI's technical prowess and sparked global conversations about the future of human-computer interaction. These systems create works of their own by drawing on patterns they have identified in vast troves of existing, human-created content. ChatGPT, along with text-to-image tools such as DALL-E 2 and Stable Diffusion, is part of a new wave of software called generative AI.

Beginning of (AI) – The seeds of the real AI revolution were germinated in the fertile soil of the University of Toronto, where Geoffrey Hinton and his colleagues developed the techniques of deep learning, which would later become the basis of generative AI. Generative AI describes the techniques and technology used to create new content. The generative AI models we use today are relatively theory-free: their emergent understanding of the world comes from neural-net machine learning techniques and mountains of data (see Part I). This technique has indeed made AI more humanlike.
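To make the "learning from mountains of data" idea concrete, here is a minimal, illustrative sketch: a single artificial neuron adjusting its weights by gradient descent. The toy dataset and all values are hypothetical; real deep learning stacks millions of such units.

```python
import numpy as np

# Toy data: the neuron should learn that output is roughly 2 * input.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([[0.0], [2.0], [4.0], [6.0]])

rng = np.random.default_rng(0)
w = rng.normal(size=(1, 1))  # weight, initialized randomly
b = 0.0                      # bias

lr = 0.05  # learning rate
for step in range(500):
    pred = X @ w + b                  # forward pass: make a prediction
    err = pred - y                    # how far off are we?
    grad_w = 2 * X.T @ err / len(X)   # gradient of mean squared error w.r.t. w
    grad_b = 2 * err.mean()           # ... and w.r.t. b
    w -= lr * grad_w                  # nudge parameters to reduce the error
    b -= lr * grad_b

print(w, b)  # w approaches ~2.0 and b approaches ~0.0, learned from data alone
```

No rule "output = 2 * input" was ever written into the program; the pattern emerges from the data, which is the essence of the theory-free approach described above.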

GPT stands for Generative Pre-trained Transformer. It is not AI itself but an AI model built by OpenAI. ChatGPT is an app; GPT is the brain behind that app. Through ongoing research and development, several GPT versions are available – o3 for reasoning, GPT-4.1 for coding, and GPT-4o for everyday chats – while GPT-5 (available soon) will pick the tools it needs to answer your prompt on its own.
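The app-versus-model distinction is easy to see in code. Here is a minimal sketch of calling a GPT model directly through OpenAI's Python library rather than through the ChatGPT app; it assumes the `openai` package is installed and an API key is set in the environment.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The "brain" (a GPT model) is selected by name; the app around it is up to you.
response = client.chat.completions.create(
    model="gpt-4o",  # swap in another version depending on the task
    messages=[{"role": "user", "content": "Explain transformers in one sentence."}],
)
print(response.choices[0].message.content)
```

ChatGPT is essentially a polished app wrapped around calls like this one.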

The T in ChatGPT stands for "transformer." The transformer is a neural-network architecture that allows machines to generate humanlike text, and it has become critical to the new wave of generative AI that can produce realistic text, images, videos, DNA sequences, and many other kinds of data. The transformer's invention (by Google) in 2017 was about as impactful to the field of AI as the advent of smartphones was to consumers. Transformers also broadened the scope of what AI engineers could do: handle far more data and process human language much more quickly.
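At the heart of the transformer is "attention," which lets every token in a sequence weigh its relevance to every other token. Below is a minimal NumPy sketch of scaled dot-product attention, the core operation from the 2017 paper, heavily simplified; the shapes and numbers are purely illustrative.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax along the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # how strongly each token attends to each other token
    weights = softmax(scores)      # each row is a probability distribution
    return weights @ V             # weighted mix of the value vectors

# Three tokens, each represented by a 4-dimensional vector (toy numbers)
rng = np.random.default_rng(1)
tokens = rng.normal(size=(3, 4))
out = attention(tokens, tokens, tokens)  # self-attention: Q = K = V
print(out.shape)  # (3, 4): one context-aware vector per token
```

Because every token can look at every other token in one step, this operation parallelizes well, which is what lets transformers handle far more data than earlier designs.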

Transformers broke chatbots out of rigid, rule-bound scripts. Chatbots can now deal with nuance and slang, refer back to something said a few sentences earlier, handle almost any random query, and give a personalized answer. To many AI researchers, that marks a step toward AGI, opening a debate about whether computers are starting to "understand" language the way humans do, or whether they are still just processing it through math-based predictions.
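Those "math-based predictions" are, at bottom, next-word probabilities. The toy sketch below uses simple bigram counts instead of a neural network – vastly simpler than GPT, and entirely illustrative – but the underlying idea is the same: the next word is drawn from a probability distribution learned from text.

```python
from collections import Counter, defaultdict

text = "the cat sat on the mat the cat ran"
words = text.split()

# Count which word follows which (a "bigram" model).
follows = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

# What is likely to come after "cat"?
counts = follows["cat"]
total = sum(counts.values())
for word, n in counts.most_common():
    print(f"P({word!r} | 'cat') = {n / total:.2f}")
```

GPT replaces the simple counts with a transformer over a huge context window, which is why it can track something said many sentences earlier rather than just the previous word.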

The GPU, or graphics processing unit, is a powerful semiconductor (chip); GPUs made by Nvidia run most of the servers training AI models today. It is a specialized chip originally designed to render images and video, but in the world of AI, GPUs have become the powerhouse behind large language models (LLMs) like ChatGPT.

Unlike CPUs (central processing units), which handle one task at a time very efficiently, GPUs are built to perform thousands of simple calculations simultaneously. That parallel processing ability makes them perfect for training and running AI models, which rely on massive amounts of data and mathematical operations.
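The sketch below gives a rough feel for why "many simple calculations at once" matters: the same arithmetic done one element at a time versus issued as a single vectorized operation. This runs NumPy on a CPU, so it is only an analogy; a GPU pushes the same principle much further with thousands of parallel cores.

```python
import time
import numpy as np

a = np.random.rand(10_000_000)
b = np.random.rand(10_000_000)

# One multiplication at a time, in a plain Python loop (sequential work)
t0 = time.perf_counter()
out_loop = [a[i] * b[i] for i in range(len(a))]
t1 = time.perf_counter()

# All 10 million multiplications issued as one vectorized operation
t2 = time.perf_counter()
out_vec = a * b
t3 = time.perf_counter()

print(f"loop:       {t1 - t0:.2f} s")
print(f"vectorized: {t3 - t2:.4f} s")  # typically orders of magnitude faster
```

Training an LLM is dominated by exactly this kind of bulk arithmetic (huge matrix multiplications), which is why the hardware that does it in parallel wins.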

Google engineers developed LaMDA (Language Model for Dialogue Applications), the company's large language model project built on the Transformer. Earlier chatbots were far less capable: their responses were mostly scripted, and they would often make wacky mistakes. LaMDA's goal was to make computers better at talking and listening.

Conclusion: Both data and computing power are needed for AI research, training, and development.

END of Part II
