
    EX260 Red Hat Certified Specialist in Ceph Cloud Storage exam

    EX260

    Description

    The Red Hat Certified Specialist in Ceph Cloud Storage exam (EX260) tests the knowledge, skills, and ability to install, configure, and manage Red Hat® Ceph Storage clusters. This exam is based on Ceph 5.0.

    Objective

    By passing this exam, you earn the Red Hat Certified Specialist in Ceph Cloud Storage credential, which also counts toward becoming a Red Hat Certified Architect (RHCA®).

    Target audience

    These audiences may be interested in becoming a Red Hat Certified Specialist in Ceph Storage Administration:

    • Red Hat Certified Engineers who wish to pursue Red Hat Certified Architect (RHCA)
    • System administrators who want to demonstrate the ability to configure Red Hat Ceph Storage clusters
    • Cloud administrators who need to configure Red Hat Ceph Storage for Red Hat OpenShift Container Platform or Red Hat OpenStack Platform

    Prerequisites

    Outline

    Study points for the exam

    To help you prepare, these exam objectives highlight the task areas you can expect to see covered in the exam. Red Hat reserves the right to add, modify, and remove exam objectives. Such changes will be made public in advance.

    You should be able to perform these tasks:

    1. Install Red Hat Ceph Storage server
      • Install a containerized Red Hat Ceph Storage server on both physical and virtual systems
      • Utilize and modify Red Hat Ansible Automation Platform installation files provided with Red Hat Ceph Storage to configure and install Red Hat Ceph Storage server
    2. Work with existing Red Hat Ceph Storage server appliances
      • Be able to change a Red Hat Ceph Storage server configuration
      • Add monitor (MON) nodes and object storage device (OSD) nodes
    3. Configure Red Hat Ceph Storage server (see the librados sketch after this list)
      • Configure a replicated storage pool
      • Store objects in a storage pool
      • Store objects in a namespace within a storage pool
      • Create and configure erasure-coded pools
      • Create an erasure-coded pool profile with specified parameters
      • Upload a file to an erasure-coded pool
      • Change default settings in the Red Hat Ceph Storage configuration files
      • Manage Red Hat Ceph Storage authentication
      • Create a Red Hat Ceph Storage client with restricted read or write access to MONs, OSDs, pools, and namespaces
      • Manage OSDs using ceph-volume
      • Configure placement group auto-scaling
    4. Provide block storage with RBD (see the RBD sketch after this list)
      • Create a RADOS block device image
      • Obtain information about a RADOS block device image
      • Map a RADOS block device image on a server
      • Use a RADOS block device image
      • Create an RBD snapshot
      • Create an RBD clone
      • Configure RBD mirrors
      • Deploy an RBD mirror agent
      • Configure one-way RBD mirroring in pool mode
      • Configure one-way RBD mirroring in image mode
      • Check the status of the mirroring process
      • Import and export RBD images
      • Export a RADOS block device to an image file
      • Create an incremental RBD image file
      • Import a full RBD image file
      • Import a full RBD image file updated with an incremental RBD image file
    5. Provide object storage with RADOSGW (see the S3 sketch after this list)
      • Deploy a RADOS gateway
      • Deploy a multisite RADOS gateway
      • Provide object storage using the Amazon S3 API
      • Be able to create a RADOSGW user that will use the S3 client commands
      • Be able to upload and download objects to a RADOSGW using the S3 client commands
      • Export S3 objects using NFS
      • Provide object storage for Swift
      • Be able to create a RADOSGW user that will use the Swift interface
      • Be able to upload or download objects to a RADOSGW using Swift commands
      • Configure Ceph Object Gateway for In-Transit Encryption
    6. Provide file storage with CephFS (see the CephFS sketch after this list)
      • Create a Red Hat Ceph Storage file system
      • Mount a Red Hat Ceph Storage file system on a client node persistently
      • Configure CephFS quotas
      • Create a CephFS snapshot
    7. Configure a CRUSH map
      • Be able to create a bucket hierarchy in a CRUSH map that can be used in an erasure profile or a replicated rule
      • Be able to remap a PG
      • Be able to remap all PGs in a pool for optimal redistribution
    8. Manage and update cluster maps
      • Manage MON and OSD maps
      • Be able to monitor and adjust the OSD storage limits used to track available space on an OSD
    9. Manage a Red Hat Ceph Storage cluster
      • Determine the general status of a Red Hat Ceph Storage cluster
      • Troubleshoot problems with OSDs and MONs
    10. Tune Red Hat Ceph Storage
      • Specify and tune key network tuning parameters for a Red Hat Ceph Storage cluster
      • Control and manage scrubbing and deep scrubbing
      • Control and manage recovery and rebalancing processes
      • Control and manage RAM utilization against I/O performance
    11. Troubleshoot Red Hat Ceph Storage server problems
      • Troubleshoot client issues
      • Enable debugging mode on RADOS gateway
      • Optimize RBD client access using key tuning parameters
    12. Integrate Red Hat Ceph Storage with Red Hat OpenStack Platform
      • Integrate Red Hat Ceph Storage using both Glance and Cinder
      • Modify key Glance configuration files to use Red Hat Ceph Storage
      • Configure Glance to use Red Hat Ceph Storage as a backend to store images in the Red Hat Ceph Storage cluster
      • Modify key Cinder configuration files to use Red Hat Ceph Storage
      • Configure Cinder to use Red Hat Ceph Storage RBDs for block storage backing volumes
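
    The sketches below illustrate a few of the task areas above with the Ceph Python bindings; they are not part of the official objectives, and every pool, image, bucket, and path name in them is a placeholder. For objective 3 (plus the basic status check from objective 9), a minimal example assuming the python3-rados bindings, a readable admin keyring, and an existing replicated pool named mypool:

      #!/usr/bin/env python3
      # Minimal librados sketch: cluster status, then object I/O inside a
      # namespace of an existing pool. Names and paths are placeholders.
      import rados

      cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
      cluster.connect()

      # Rough cluster-wide usage figures (general cluster status).
      print(cluster.get_cluster_stats())

      ioctx = cluster.open_ioctx('mypool')         # pool must already exist
      ioctx.set_namespace('ns1')                   # subsequent objects land in this namespace
      ioctx.write_full('greeting', b'hello ceph')  # store an object
      print(ioctx.read('greeting'))                # read it back: b'hello ceph'

      ioctx.close()
      cluster.shutdown()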
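
    For objective 4, a sketch with the librbd Python bindings (python3-rbd): create an image, take and protect a snapshot, then clone it. The pool name rbdpool and the image names are assumptions for illustration only:

      #!/usr/bin/env python3
      # Minimal librbd sketch: image creation, snapshot, protection and clone.
      import rados
      import rbd

      cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
      cluster.connect()
      ioctx = cluster.open_ioctx('rbdpool')          # RBD pool assumed to exist

      rbd_inst = rbd.RBD()
      rbd_inst.create(ioctx, 'disk1', 1 * 1024**3)   # 1 GiB image

      image = rbd.Image(ioctx, 'disk1')
      print(image.size())                            # basic image information
      image.create_snap('snap1')                     # RBD snapshot
      image.protect_snap('snap1')                    # must be protected before cloning
      image.close()

      # Clone the protected snapshot into a new child image in the same pool.
      rbd_inst.clone(ioctx, 'disk1', 'snap1', ioctx, 'disk1-clone')

      ioctx.close()
      cluster.shutdown()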
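
    For objective 5, a sketch of S3-style access through a RADOS gateway using boto3. The endpoint URL and credentials are placeholders for a RADOSGW user created beforehand (for example with radosgw-admin user create):

      #!/usr/bin/env python3
      # Minimal S3 sketch against a RADOS gateway. Endpoint and keys are placeholders.
      import boto3

      s3 = boto3.client(
          's3',
          endpoint_url='http://rgw.example.com:8080',   # placeholder RGW endpoint
          aws_access_key_id='ACCESS_KEY',
          aws_secret_access_key='SECRET_KEY',
      )

      s3.create_bucket(Bucket='demo-bucket')
      s3.put_object(Bucket='demo-bucket', Key='hello.txt', Body=b'hello from rgw')
      obj = s3.get_object(Bucket='demo-bucket', Key='hello.txt')
      print(obj['Body'].read())                         # b'hello from rgw'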
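
    For objective 6, a sketch with the libcephfs Python bindings (python3-cephfs), assuming an existing CephFS file system; the directory and the 1 GiB quota value are placeholders. CephFS quotas are applied through the ceph.quota.* virtual extended attributes:

      #!/usr/bin/env python3
      # Minimal libcephfs sketch: create a directory and put a byte quota on it.
      import cephfs

      fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
      fs.mount()

      fs.mkdir('/projects', 0o755)
      # Limit the subtree to 1 GiB; ceph.quota.max_bytes takes a byte count.
      fs.setxattr('/projects', 'ceph.quota.max_bytes', b'1073741824', 0)
      print(fs.getxattr('/projects', 'ceph.quota.max_bytes'))

      fs.unmount()
      fs.shutdown()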

    As with all Red Hat performance-based exams, configurations must persist after reboot without intervention.

    Red Hat reserves the right to add, modify, and remove objectives. Such changes will be made public in advance through revisions to this document.

    Notes

    Duration: 3.00 hours

    We accept payments in PLN or EUR.
    For available dates and any other questions, please contact us at osec@osec.pl

    For more details, please contact us at osec@osec.pl

    Note: The course outline is subject to change as technology advances and the underlying job evolves. For questions or confirmation on a specific objective or topic, please contact us at osec@osec.pl
    Net price: 1977 PLN (450 EUR)
    Gross price: 2431.71 PLN

    The exchange rate used in the above calculation is 1 EUR = 4.3924 PLN (NBP table no. 214/C/NBP/2024 of 2024-10-31, effective from 2024-11-04). The PLN price is indicative: it is recalculated from EUR/USD at the NBP selling rate on the day the invoice is issued. We accept payments in PLN or EUR.
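
    For reference, the quoted amounts reduce to one line of arithmetic; the 23% VAT rate below is inferred from the net and gross figures above, not stated in the price list:

      # Reproduce the quoted prices: 450 EUR at the NBP rate of 4.3924, plus 23% VAT.
      net_pln = round(450 * 4.3924)         # -> 1977 PLN net (rounded)
      gross_pln = round(net_pln * 1.23, 2)  # -> 2431.71 PLN gross
      print(net_pln, gross_pln)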

    Note

    We offer virtual, self-paced, and classroom training (in Warsaw and at customer sites).
    To arrange the details, please contact us at osec@osec.pl


    Legend:

      – Guaranteed to Run date (GTR)

    Dates