Cp-ansible: Document usage of different ip/hostname for listener

Created on 9 Jun 2020  ·  12 Comments  ·  Source: confluentinc/cp-ansible

When setting up a listener, one can change the IP/hostname for it.
The code of these roles is able to do so; however, an example of that could be added to the example hosts file.

I can create a PR if this is a desirable addition :)
e.g.

```
broker:
  name: BROKER
  port: 9091
  ssl_enabled: false
  ssl_mutual_auth_enabled: false
  sasl_protocol: none
internal:
  name: INTERNAL
  port: 9092
  hostname: ip-172-31-18-160.us-west-2.compute.internal:19091
  ssl_enabled: true
  ssl_mutual_auth_enabled: false
  sasl_protocol: scram
```

Labels: enhancement, question

All 12 comments

Ya... this feature is super helpful when working with AWS. What I usually do is this:

```
kafka_broker:
  vars:
    kafka_broker_custom_listeners:
      external:
        name: EXTERNAL
        port: 9093
  hosts:
    ip-172-31-43-14.us-west-2.compute.internal:
      ansible_ssh_host: ec2-34-209-19-19.us-west-2.compute.amazonaws.com
      kafka_broker_custom_listeners:
        external:
          hostname: ec2-34-209-19-19.us-west-2.compute.amazonaws.com
```

And I rely on hash merging to merge the kafka_broker_custom_listeners dict... unfortunately, that seems really confusing to document in the hosts_example.yml file, in my opinion.
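To spell out what the hash merge gives you (assuming dict merging is actually in effect, e.g. hash_behaviour = merge in ansible.cfg, or an equivalent combine step), that host ends up with an effective listener roughly like this - sketched by hand from the values above, not cp-ansible output:

```
kafka_broker_custom_listeners:
  external:
    name: EXTERNAL
    port: 9093
    hostname: ec2-34-209-19-19.us-west-2.compute.amazonaws.com
```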

Have you read the docs here:
https://docs.confluent.io/current/installation/cp-ansible/index.html

It does not look like we have custom listeners documented there yet, but that seems like a better place because you can include multiple samples and descriptions.

Hey - yeah, fair point. That would probably be better off in the other docs.

Another thing I was wondering about: when I use two different interfaces of the same host for different listeners and want SSL for both:

I either need two SSL certs matching both hostnames (something these roles currently won't do), or I switch off SSL hostname checking (and use IPs), or I switch off SSL altogether. How would you use this repo for such a scenario?

I will add the listeners stuff to our docs for the next release, working on an edit right now!

great question! so there are 3 ways to put keystores on the hosts:

  1. pass your own certs
  2. pass your own keystores
  3. have ansible do it for you

For 1 and 2, you can pass certs whose SAN extension covers multiple hostnames.
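As a side note, if you generate your own certs with Ansible for option 1, a CSR whose SAN covers both listener hostnames could look roughly like this (this uses the community.crypto collection, and the paths/hostnames are only illustrative - it is not a cp-ansible task):

```
- name: Create a CSR whose SAN covers both listener hostnames (illustrative sketch)
  community.crypto.openssl_csr:
    path: /var/ssl/private/kafka_broker.csr             # hypothetical path
    privatekey_path: /var/ssl/private/kafka_broker.key  # hypothetical path
    common_name: ip-172-31-43-14.us-west-2.compute.internal
    subject_alt_name:
      - DNS:ip-172-31-43-14.us-west-2.compute.internal
      - DNS:ec2-34-209-19-19.us-west-2.compute.amazonaws.com
```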

For number 3, I actually added a little feature that passes a list of hostnames to the auto generated certs:
https://github.com/confluentinc/cp-ansible/blob/5.5.0-post/roles/confluent.kafka_broker/tasks/main.yml#L39

and then those hostnames get put into a SAN extension:
https://github.com/confluentinc/cp-ansible/blob/5.5.0-post/roles/confluent.ssl/tasks/self_signed_certs.yml#L36

Here's the cert_extension filter (in retrospect, the join filter would have worked here):
https://github.com/confluentinc/cp-ansible/blob/5.5.0-post/filter_plugins/filters.py#L56

But it's important to note that, right now, cp-ansible cannot handle multiple keystores.

Awesome - looking forward to the updated docs!
And thanks for the detailed answer - your self-signed cert code is very nice!
Are you in principle interested in having multiple-keystore functionality?
It should not be too hard to implement, since you are setting up SSL independently for each listener anyway.
Or would you rather rely on the user to define SANs for their own certs?

Ya, the self-signed stuff is nifty, but I wonder if people really use it beyond demoing.

I prefer the single keystore/truststore approach, though I realize that technically there could be one per hostname.

It gets convoluted because inside one keystore you could have:

  • a cert with multiple hostnames in the SANs
  • or multiple certs, each with a hostname in its DNAME

Because of this, I think it's best to just do one keystore.

What are your thoughts?

Ha - now that you bring up the convolution :thinking:
I was actually in favor of multiple keystores, but maybe the complexity is really an argument for SANs plus one keystore. This is only my spontaneous answer, though - I will give it a bit more thought.

I saw a very nice solution in the "wild" today:

Deploying an /etc/hosts entry on the Kafka brokers that maps the external hostname to the internal interface's address.

If a Kafka request then comes from "inside", it uses the /etc/hosts entry and targets the "internal" listener while still using the external hostname, so certificate hostname verification works.
I am wondering whether this could be a legitimate general solution :thinking:
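Just to make the trick concrete, a single task along these lines would deploy such an entry (plain Ansible lineinfile, not part of cp-ansible; the IP/hostname pairing is only an example taken from the hosts above):

```
- name: Resolve the external hostname to the broker's internal address (illustrative sketch)
  ansible.builtin.lineinfile:
    path: /etc/hosts
    regexp: 'ec2-34-209-19-19\.us-west-2\.compute\.amazonaws\.com$'
    line: "172.31.43.14 ec2-34-209-19-19.us-west-2.compute.amazonaws.com"
  become: true
```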

@Fobhep this is something I have used in production for years. It is practically mandatory when you want to set up SASL in a multihomed environment.

@jrevillard thanks for the feedback.
Maybe we should make these playbooks do that as well, then.

@Fobhep Not sure we should implement this; we try to minimize modifying the OS unless it has a direct impact on Kafka specifically (open file limits, for example). I think it may make sense to document this as a potential solution for multi-homed environments, however modifying the hosts file on a user's behalf seems risky and of too little value. Happy to be proven wrong on this last point, however.

@JumaX fair point - documenting is probably a good idea.
Implementing it would go too far - I agree.
Imho we can close this then :)

@Fobhep the hosts_example.yml has been updated in the 6.0 release to show how you can map IPs/hostnames. Closing this out as resolved.
