In Generative AI with Large Language Models (LLMs), you'll learn the fundamentals of how generative AI works and how to deploy it in real-world applications.

In this course you will:

- Understand generative AI and describe the key steps in a typical generative AI lifecycle, from data gathering and model selection through performance evaluation and deployment.
- Explain in detail the transformer architecture that powers LLMs, how they are trained, and how fine-tuning enables LLMs to be adapted to a variety of applications.
- Apply scaling laws to optimize model performance across dataset size, compute budget, and inference requirements.
- Use state-of-the-art training, tuning, and inference tools, as well as deployment techniques, to maximize the performance of models within the constraints of your project.

You'll also examine the opportunities and challenges that generative AI creates for businesses, after hearing stories from industry researchers and practitioners. Developers who have a solid foundational understanding of how LLMs work, along with the best practices for training and deploying them, can make informed decisions for their companies and build working prototypes more quickly. This course will help learners build practical intuition about how best to use this exciting new technology.

This is an intermediate-level course, so you should have some prior experience programming in Python to get the most out of it. You should also be familiar with the fundamentals of machine learning, including supervised and unsupervised learning, loss functions, and splitting data into training, validation, and test sets. If you have taken the Machine Learning Specialization or Deep Learning Specialization from DeepLearning.AI, you'll be ready to take this course and dive deeper into the fundamentals of generative AI.