Nomad: [fix] [intro] Running the client on a Mac

Created on 24 Feb 2017  ·  3 comments  ·  Source: hashicorp/nomad

Nomad version

Nomad v0.5.4

Operating system and environment details

macOS version 10.12.3
Docker version 1.13.1

Issue

Going through the introduction documentation, the jobs and clustering sections create your working directories with the client dirs set to mode 700 and user:group root:root.

If the permissions on the /private/tmp/client1 and /private/tmp/client2 folders are updated to allow Docker to open the directories and mount the images, the problems go away.
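A minimal sketch of that workaround, assuming the client directories live at the paths from this report. The helper name `fix_client_dirs` is ours for illustration, not a Nomad command:

```shell
# fix_client_dirs: hypothetical helper, not a Nomad command.
# The guide's clustering steps leave these dirs mode 700, owned by
# root:root, so Docker for Mac cannot traverse them when bind-mounting
# the alloc directories. Loosening them to 755 lets the mounts succeed.
fix_client_dirs() {
  for d in "$@"; do
    if [ -d "$d" ]; then
      chmod 755 "$d"   # was 700 root:root per this report
    fi
  done
}

# On an affected machine this needs sudo, since root owns the dirs:
#   sudo sh -c 'chmod 755 /private/tmp/client1 /private/tmp/client2'
fix_client_dirs /private/tmp/client1 /private/tmp/client2
```

Alternatively, chown the directories to your login user; either change only needs to happen once per client working directory.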

Reproduction steps

Follow the guides at nomadproject.io on a Mac.

Nomad server logs (if appropriate)

2017/02/24 10:36:19.651582 [DEBUG] sched: <Eval '3df17ef5-a3e6-fd20-dcb7-2497874360b1' JobID: 'example'>: allocs: (place 2) (update 0) (migrate 0) (stop 0) (ignore 0) (lost 1)
2017/02/24 10:36:19.652943 [DEBUG] worker: submitted plan for evaluation 3df17ef5-a3e6-fd20-dcb7-2497874360b1
2017/02/24 10:36:19.652967 [DEBUG] sched: <Eval '3df17ef5-a3e6-fd20-dcb7-2497874360b1' JobID: 'example'>: setting status to complete
2017/02/24 10:36:19.653709 [DEBUG] worker: updated evaluation <Eval '3df17ef5-a3e6-fd20-dcb7-2497874360b1' JobID: 'example'>
2017/02/24 10:36:19.653727 [DEBUG] worker: ack for evaluation 3df17ef5-a3e6-fd20-dcb7-2497874360b1

Nomad client logs (if appropriate)

2017/02/24 10:36:23.143467 [INFO] driver.docker: created container eb5dfe114273177b41a4d3ef9e9b2c34d8eb53ff87eac0419193d6ea57046215
2017/02/24 10:36:23.147432 [DEBUG] driver.docker: failed to start container "eb5dfe114273177b41a4d3ef9e9b2c34d8eb53ff87eac0419193d6ea57046215" (attempt 1): API error (502): Mounts denied: { errno = [EACCES]; call = getattrlist; label = /private/tmp/client2/alloc }

2017/02/24 10:36:23.147482 [ERR] driver.docker: failed to start container eb5dfe114273177b41a4d3ef9e9b2c34d8eb53ff87eac0419193d6ea57046215: API error (502): Mounts denied: { errno = [EACCES]; call = getattrlist; label = /private/tmp/client2/alloc }

2017/02/24 10:36:23 [DEBUG] plugin: /usr/local/bin/nomad: plugin process exited
2017/02/24 10:36:23.150407 [WARN] client: failed to start task "redis" for alloc "12d6165d-055b-a9a4-9846-0f52022d8a5f": Failed to start container eb5dfe114273177b41a4d3ef9e9b2c34d8eb53ff87eac0419193d6ea57046215: API error (502): Mounts denied: { errno = [EACCES]; call = getattrlist; label = /private/tmp/client2/alloc }

2017/02/24 10:36:23.150871 [INFO] client: Not restarting task: redis for alloc: 12d6165d-055b-a9a4-9846-0f52022d8a5f 
2017/02/24 10:36:23.151412 [INFO] client: marking allocation 12d6165d-055b-a9a4-9846-0f52022d8a5f for GC
2017/02/24 10:36:23.153503 [DEBUG] driver.docker: unable to cleanup image "sha256:481995377a044d40ca3358e4203fe95eca1d58b98a1d4c2d9cec51c0c4569613": still in use
2017/02/24 10:36:23.153892 [INFO] client: marking allocation 12d6165d-055b-a9a4-9846-0f52022d8a5f for GC
2017/02/24 10:36:23.153913 [DEBUG] client: couldn't add alloc 12d6165d-055b-a9a4-9846-0f52022d8a5f for GC: alloc 12d6165d-055b-a9a4-9846-0f52022d8a5f already being tracked for GC
2017/02/24 10:36:23.401113 [DEBUG] client: updated allocations at index 95 (total 3) (pulled 0) (filtered 3)
2017/02/24 10:36:23.401190 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 3)

Job file (if appropriate)

# There can only be a single job definition per file. This job is named
# "example" so it will create a job with the ID and Name "example".

# The "job" stanza is the top-most configuration option in the job
# specification. A job is a declarative specification of tasks that Nomad
# should run. Jobs have a globally unique name and one or many task groups,
# which
# are themselves collections of one or many tasks.
#
# For more information and examples on the "job" stanza, please see
# the online documentation at:
#
#     https://www.nomadproject.io/docs/job-specification/job.html
#
job "example" {
  # The "region" parameter specifies the region in which to execute the job. If
  # omitted, this inherits the default region name of "global".
  # region = "global"

  # The "datacenters" parameter specifies the list of datacenters which should
  # be considered when placing this task. This must be provided.
  datacenters = ["dc1"]

  # The "type" parameter controls the type of job, which impacts the scheduler's
  # decision on placement. This configuration is optional and defaults to
  # "service". For a full list of job types and their differences, please see
  # the online documentation.
  #
  # For more information, please see the online documentation at:
  #
  #     https://www.nomadproject.io/docs/jobspec/schedulers.html
  #
  type = "service"

  # The "constraint" stanza defines additional constraints for placing this job,
  # in addition to any resource or driver constraints. This stanza may be placed
  # at the "job", "group", or "task" level, and supports variable interpolation.
  #
  # For more information and examples on the "constraint" stanza, please see
  # the online documentation at:
  #
  #     https://www.nomadproject.io/docs/job-specification/constraint.html
  #
  # constraint {
  #   attribute = "${attr.kernel.name}"
  #   value     = "linux"
  # }

  # The "update" stanza specifies the job update strategy. The update strategy
  # is used to control things like rolling upgrades. If omitted, rolling
  # updates are disabled.
  #
  # For more information and examples on the "update" stanza, please see
  # the online documentation at:
  #
  #     https://www.nomadproject.io/docs/job-specification/update.html
  #
  update {
    # The "stagger" parameter specifies to do rolling updates of this job every
    # 10 seconds.
    stagger = "10s"

    # The "max_parallel" parameter specifies the maximum number of updates to
    # perform in parallel. In this case, this specifies to update a single task
    # at a time.
    max_parallel = 1
  }

  # The "group" stanza defines a series of tasks that should be co-located on
  # the same Nomad client. Any task within a group will be placed on the same
  # client.
  #
  # For more information and examples on the "group" stanza, please see
  # the online documentation at:
  #
  #     https://www.nomadproject.io/docs/job-specification/group.html
  #
  group "cache" {
    # The "count" parameter specifies the number of the task groups that should
    # be running under this group. This value must be non-negative and defaults
    # to 1.
    count = 3

    # The "restart" stanza configures a group's behavior on task failure. If
    # left unspecified, a default restart policy is used based on the job type.
    #
    # For more information and examples on the "restart" stanza, please see
    # the online documentation at:
    #
    #     https://www.nomadproject.io/docs/job-specification/restart.html
    #
    restart {
      # The number of attempts to run the job within the specified interval.
      attempts = 10
      interval = "5m"

      # The "delay" parameter specifies the duration to wait before restarting
      # a task after it has failed.
      delay = "25s"

      # The "mode" parameter controls what happens when a task has restarted
      # "attempts" times within the interval. "delay" mode delays the next
      # restart until the next interval. "fail" mode does not restart the task
      # if "attempts" has been hit within the interval.
      mode = "delay"
    }

    # The "ephemeral_disk" stanza instructs Nomad to utilize an ephemeral disk
    # instead of a hard disk requirement. Clients using this stanza should
    # not specify disk requirements in the resources stanza of the task. All
    # tasks in this group will share the same ephemeral disk.
    #
    # For more information and examples on the "ephemeral_disk" stanza, please
    # see the online documentation at:
    #
    #     https://www.nomadproject.io/docs/job-specification/ephemeral_disk.html
    #
    ephemeral_disk {
      # When sticky is true and the task group is updated, the scheduler
      # will prefer to place the updated allocation on the same node and
      # will migrate the data. This is useful for tasks that store data
      # that should persist across allocation updates.
      # sticky = true
      # 
      # Setting migrate to true results in the allocation directory of a
      # sticky allocation being migrated.
      # migrate = true

      # The "size" parameter specifies the size in MB of shared ephemeral disk
      # between tasks in the group.
      size = 300
    }

    # The "task" stanza creates an individual unit of work, such as a Docker
    # container, web application, or batch processing.
    #
    # For more information and examples on the "task" stanza, please see
    # the online documentation at:
    #
    #     https://www.nomadproject.io/docs/job-specification/task.html
    #
    task "redis" {
      # The "driver" parameter specifies the task driver that should be used to
      # run the task.
      driver = "docker"

      # The "config" stanza specifies the driver configuration, which is passed
      # directly to the driver to start the task. The details of configurations
      # are specific to each driver, so please see specific driver
      # documentation for more information.
      config {
        image = "redis:2.8"
        port_map {
          db = 6379
        }
      }

      # The "artifact" stanza instructs Nomad to download an artifact from a
      # remote source prior to starting the task. This provides a convenient
      # mechanism for downloading configuration files or data needed to run the
      # task. It is possible to specify the "artifact" stanza multiple times to
      # download multiple artifacts.
      #
      # For more information and examples on the "artifact" stanza, please see
      # the online documentation at:
      #
      #     https://www.nomadproject.io/docs/job-specification/artifact.html
      #
      # artifact {
      #   source = "http://foo.com/artifact.tar.gz"
      #   options {
      #     checksum = "md5:c4aa853ad2215426eb7d70a21922e794"
      #   }
      # }

      # The "logs" stanza instructs the Nomad client on how many log files and
      # the maximum size of those logs files to retain. Logging is enabled by
      # default, but the "logs" stanza allows for finer-grained control over
      # the log rotation and storage configuration.
      #
      # For more information and examples on the "logs" stanza, please see
      # the online documentation at:
      #
      #     https://www.nomadproject.io/docs/job-specification/logs.html
      #
      # logs {
      #   max_files     = 10
      #   max_file_size = 15
      # }

      # The "resources" stanza describes the requirements a task needs to
      # execute. Resource requirements include memory, disk space, network,
      # cpu, and more. This ensures the task will execute on a machine that
      # contains enough resource capacity.
      #
      # For more information and examples on the "resources" stanza, please see
      # the online documentation at:
      #
      #     https://www.nomadproject.io/docs/job-specification/resources.html
      #
      resources {
        cpu    = 500 # 500 MHz
        memory = 256 # 256MB
        network {
          mbits = 10
          port "db" {}
        }
      }

      # The "service" stanza instructs Nomad to register this task as a service
      # in the service discovery engine, which is currently Consul. This will
      # make the service addressable after Nomad has placed it on a host and
      # port.
      #
      # For more information and examples on the "service" stanza, please see
      # the online documentation at:
      #
      #     https://www.nomadproject.io/docs/job-specification/service.html
      #
      service {
        name = "global-redis-check"
        tags = ["global", "cache"]
        port = "db"
        check {
          name     = "alive"
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }

      # The "template" stanza instructs Nomad to manage a template, such as
      # a configuration file or script. This template can optionally pull data
      # from Consul or Vault to populate runtime configuration data.
      #
      # For more information and examples on the "template" stanza, please see
      # the online documentation at:
      #
      #     https://www.nomadproject.io/docs/job-specification/template.html
      #
      # template {
      #   data          = "---\nkey: {{ key \"service/my-key\" }}"
      #   destination   = "local/file.yml"
      #   change_mode   = "signal"
      #   change_signal = "SIGHUP"
      # }

      # The "vault" stanza instructs the Nomad client to acquire a token from
      # a HashiCorp Vault server. The Nomad servers must be configured and
      # authorized to communicate with Vault. By default, Nomad will inject
      # the token into the job via an environment variable and make the token
      # available to the "template" stanza. The Nomad client handles the renewal
      # and revocation of the Vault token.
      #
      # For more information and examples on the "vault" stanza, please see
      # the online documentation at:
      #
      #     https://www.nomadproject.io/docs/job-specification/vault.html
      #
      # vault {
      #   policies      = ["cdn", "frontend"]
      #   change_mode   = "signal"
      #   change_signal = "SIGHUP"
      # }

      # Controls the timeout between signalling a task it will be killed
      # and killing the task. If not set a default is used.
      # kill_timeout = "20s"
    }
  }
}

Most helpful comment

It took me some time to track down this issue. How about making it more explicit in the getting started guide that running Nomad on a Mac rather than on a Linux box will lead to problems?

As it stands, this gives a negative first impression of Nomad.

All 3 comments

Hey,

Unfortunately, the getting started guide targets a Linux environment and is meant to be run on the provided Vagrant image. The directory is set up with those permissions because it limits the visibility of potentially sensitive application data, so we will not be changing the default.

As you found, the Docker driver can be made to work if the directory is created with the correct permissions, which the operator can do.

Thanks,
Alex


~Agents no longer need to run as root. Just run the client without sudo and it should work. Docs will be updated ASAP.~

Actually, creating the data_dir as the local user before running the clients as root is the correct approach. The docs have been updated to reflect this.
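That flow can be sketched as follows. The directory paths match the getting started guide; the agent commands are shown commented because they require the guide's config files:

```shell
# 1. Create the client data dirs as your local (non-root) user first,
#    so they exist with your ownership instead of being created
#    700 root:root by the root-run agent.
#    (On macOS /tmp resolves to /private/tmp.)
mkdir -p /tmp/client1 /tmp/client2

# 2. Then start the client agents as usual, per the guide:
#    sudo nomad agent -config client1.hcl
#    sudo nomad agent -config client2.hcl
```

With the data_dir already present and owned by your user, Docker for Mac can traverse it when bind-mounting the alloc directories.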
