In this webinar, the speaker Ashish Kumar will share his learnings (challenges and ways to overcome them) from building successful LLM applications, i.e., applications deployed and in use in production. The use cases come from the Healthcare and Finance industries and combine Generative and Extractive approaches. To set the context and broaden the learning experience for a wider audience, we will cover the most common LLM use cases (based on what our clients are asking us to build), notable LLMs, and cost/performance/accuracy optimization methods. We will also provide a quasi-primer on what is under the hood of an LLM's magic.
Large Language Models are predominantly used in fields that generate large volumes of data, such as healthcare, BFSI, software, and other data-intensive businesses.
A brief intro to Transformers, Seq2Seq, and Encoder-Decoder architectures
Deep dive into the most popular LLMs (GPT, LLaMA, PaLM, Alpaca, etc.): open source vs. proprietary models, reproducibility of outputs/prompts, and the high cost of training and per-token usage
Talk to your PDF (extraction + vector DB + LLM-based generation), talk to your database (agents and tools), and automation of concierge tasks such as booking tickets and writing and sending newsletters
Who should attend?
Data Science Experts, AI/ML Leaders, Data Scientists, NLP Engineers, MLOps Engineers, Data Engineers, Data Analysts, Full Stack Developers, Application Engineers, and Product Managers.
Register now
Ashish Kumar is a seasoned data science and engineering professional, a published author, and a humanist. An IIT Madras graduate and a Young India Fellow (Ashoka University), he has 11+ years of experience implementing and deploying Data Science & ML solutions for challenging industry problems, in both hands-on and leadership roles. He is fluent in Python, PySpark, and various dialects of SQL. His specialities include NLP, document search and retrieval, preventive maintenance, financial mathematics, and Generative AI. He also trains and mentors data science aspirants.