
Implement Generative AI engineering with Azure Databricks

Course code: DP-3028 / ENG DL 1d

This course covers generative AI engineering on Azure Databricks, using Spark to explore, fine-tune, evaluate, and integrate advanced language models. It teaches how to implement techniques like retrieval-augmented generation (RAG) and multi-stage reasoning, as well as how to fine-tune large language models for specific tasks and evaluate their performance. Learners will also explore responsible AI practices for deploying AI solutions and how to manage models in production using LLMOps (Large Language Model Operations) on Azure Databricks.

For more information, please contact our sales department.
2 500,00 PLN net (3 075,00 PLN gross)

This course is designed for:

  • data scientists,
  • machine learning engineers,
  • other AI practitioners who want to build generative AI applications using Azure Databricks.

It is intended for professionals familiar with fundamental AI concepts and the Azure Databricks platform.

  1. Basics of using LLMs in Azure Databricks – You'll learn how to work with large language models on the Azure Databricks platform.

  2. Hands-on learning – Through the labs, you will gain experience that you can easily transfer to real tasks at work.

  3. Language model management – You will learn how to manage language models in Azure Databricks.

  4. Responsible AI – You will learn the principles of responsible development of AI-based solutions.

  5. Understanding Databricks – You will get to know Databricks in practice, in a way that lets you explore further capabilities of the service after the training.

Training program:

1. Introduction to language models in Azure Databricks

    • An overview of generative AI.
    • An overview of large language models (LLMs).
    • Identify the key components of an LLM application.
    • Use LLMs for natural language processing (NLP) tasks (a short sketch follows this list).
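
A minimal sketch of the last bullet above, assuming the Hugging Face transformers package is installed on the cluster (e.g. via %pip install transformers) and using a small, publicly available summarization model purely as a stand-in:

    # Run a pre-trained language model for an NLP task (summarization)
    # from a Databricks notebook cell.
    from transformers import pipeline

    # Small summarization model chosen for illustration only.
    summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

    text = (
        "Azure Databricks combines Apache Spark with collaborative notebooks, "
        "which makes it a convenient environment for experimenting with language models."
    )

    result = summarizer(text, max_length=40, min_length=10, do_sample=False)
    print(result[0]["summary_text"])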

2. Implement Retrieval Augmented Generation (RAG) with Azure Databricks

    • Familiarize yourself with the main concepts of the RAG workflow.
    • Prepare data for RAG.
    • Find relevant data using vector search (a retrieval sketch follows this list).
    • Re-rank the retrieved results.
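
A minimal retrieval sketch for the vector-search step, assuming the databricks-vectorsearch package is installed; the endpoint name, index name and column names below are hypothetical placeholders for an index you have already created (re-ranking would then be applied to the returned rows before generation):

    from databricks.vector_search.client import VectorSearchClient

    client = VectorSearchClient()

    # Hypothetical endpoint and index names; replace with your own.
    index = client.get_index(
        endpoint_name="rag-endpoint",
        index_name="main.default.docs_index",
    )

    question = "How do I create a Delta table?"

    # Retrieve the chunks most similar to the question.
    results = index.similarity_search(
        query_text=question,
        columns=["chunk_id", "chunk_text"],
        num_results=5,
    )

    # Build the context passed to the generation step of the RAG pipeline.
    rows = results["result"]["data_array"]
    context = "\n\n".join(row[1] for row in rows)
    prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"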

3. Implement multi-stage reasoning in Azure Databricks

    • What are multi-stage reasoning systems?
    • Explore LangChain (a two-stage chain sketch follows this list).
    • Explore LlamaIndex.
    • Explore the Haystack and DSPy frameworks.
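
A minimal two-stage reasoning sketch with LangChain, assuming the databricks-langchain and langchain-core packages are installed; the serving endpoint name is a placeholder for a foundation model endpoint available in your workspace:

    from databricks_langchain import ChatDatabricks
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.output_parsers import StrOutputParser

    # Placeholder endpoint name; use any chat model endpoint served in your workspace.
    llm = ChatDatabricks(endpoint="databricks-meta-llama-3-3-70b-instruct")

    # Stage 1: extract the key facts from a document.
    extract = ChatPromptTemplate.from_template(
        "List the key facts in the following text:\n\n{document}"
    ) | llm | StrOutputParser()

    # Stage 2: answer a question using only the extracted facts.
    answer = ChatPromptTemplate.from_template(
        "Facts:\n{facts}\n\nAnswer the question using only the facts above: {question}"
    ) | llm | StrOutputParser()

    facts = extract.invoke({"document": "Databricks was founded in 2013 by the creators of Apache Spark."})
    print(answer.invoke({"facts": facts, "question": "Who founded Databricks?"}))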

4. Fine-tune language models with Azure Databricks

    • What is fine-tuning?
    • Fine-tune an Azure OpenAI model (a hedged example follows this list).
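
A hedged sketch of submitting an Azure OpenAI fine-tuning job from Python, assuming the openai package (v1 or later); the endpoint, API version, base model and training file are illustrative placeholders, and which base models can be fine-tuned depends on your Azure OpenAI resource and region:

    import os
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-10-21",  # example API version
    )

    # Upload a JSONL file of chat-formatted training examples.
    training_file = client.files.create(
        file=open("train.jsonl", "rb"),
        purpose="fine-tune",
    )

    # Start the fine-tuning job on a base model that supports fine-tuning.
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-4o-mini-2024-07-18",  # example base model
    )
    print(job.id, job.status)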

5. Evaluate language models with Azure Databricks

    • Explore LLM evaluation.
    • Evaluate LLMs and AI systems.
    • Evaluate LLMs using standard metrics (an MLflow sketch follows this list).
    • Describe how an LLM can be used as an evaluation judge (LLM-as-a-judge).
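
A minimal sketch of metric-based evaluation with MLflow, assuming MLflow 2.x with its LLM evaluation extras (e.g. the evaluate and textstat packages) installed; the tiny in-memory dataset and column names are illustrative only:

    import mlflow
    import pandas as pd

    eval_df = pd.DataFrame(
        {
            "inputs": ["What is Delta Lake?"],
            "ground_truth": ["Delta Lake is an open-source storage layer with ACID transactions."],
            "predictions": ["Delta Lake is a storage layer that adds ACID transactions to data lakes."],
        }
    )

    with mlflow.start_run():
        # Evaluate a static set of predictions against references.
        results = mlflow.evaluate(
            data=eval_df,
            predictions="predictions",
            targets="ground_truth",
            model_type="question-answering",  # enables built-in metrics such as exact_match
        )
        print(results.metrics)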

6. Review responsible AI principles for language models in Azure Databricks

    • What is responsible AI?
    • Identify potential risk factors.
    • Troubleshoot and mitigate identified issues.
    • Use key security tools to protect AI systems.

7. Implement LLMOps in Azure Databricks

    • Transition from traditional MLOps to LLMOps.
    • Overview of model deployments.
    • Describe MLflow's deployment capabilities.
    • Manage models using Unity Catalog (a registration sketch follows this list).
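
A minimal sketch of registering a model in Unity Catalog with MLflow, assuming Unity Catalog is enabled for the workspace; the trivial pyfunc model and the three-level name main.default.demo_llm_app are placeholders for a real LLM application:

    import mlflow
    import pandas as pd
    from mlflow.models import infer_signature

    # Use Unity Catalog as the MLflow model registry.
    mlflow.set_registry_uri("databricks-uc")

    class Echo(mlflow.pyfunc.PythonModel):
        """Trivial stand-in for a real LLM application."""
        def predict(self, context, model_input):
            return model_input

    example = pd.DataFrame({"text": ["hello"]})
    signature = infer_signature(example, example)  # Unity Catalog requires a model signature

    with mlflow.start_run() as run:
        mlflow.pyfunc.log_model(
            artifact_path="model",
            python_model=Echo(),
            signature=signature,
            input_example=example,
        )

    mlflow.register_model(
        model_uri=f"runs:/{run.info.run_id}/model",
        name="main.default.demo_llm_app",  # hypothetical catalog.schema.model name
    )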

Prerequisites:

  • Familiarity with the basic concepts of Azure Databricks.
  • Basic knowledge of Python.
  • Completion of the AI-900 learning path.

Training method:

  • Lecture and product presentation (50%)
  • Exercises (50%)

Major teaching tools include PowerPoint presentations, test-environment demonstrations, hands-on lab environments, and Microsoft Learn.

  • Training: English

  • Materials: English