# Documentation
This is a placeholder page that shows you how to use this template site.
This section is where the user documentation for your project lives - all the information your users need to understand and successfully use your project.
For large documentation sets we recommend adding content under the headings in this section, though if some or all of them don't apply to your project, feel free to remove them or add your own. You can see an example of a smaller Docsy documentation site in the Docsy User Guide, which lives in the Docsy theme repo if you'd like to copy its docs section.
Other content such as marketing material, case studies, and community updates should live in the About and Community pages.
Find out how to use the Docsy theme in the Docsy User Guide. You can learn more about how to organize your documentation (and how we organized this site) in Organizing Your Content.
## 1 - Examples
See your project in action!
Do you have any example applications or code for your users in your repo or elsewhere? Link to your examples here.
## 2 - Operate
A list of how-tos for operating Kargo infrastructure.
### 2.1 - Kafka
A list of how-tos for managing Kafka.
#### 2.1.1 - Create Kafka Topic
How to create a Kafka topic from the GitHub config repository.
A topic is a category or feed name to which records are stored and published. Each topic has a unique name across the entire Kafka cluster. Topics are partitioned, meaning a topic is spread over a number of "buckets" located on different Kafka brokers.

For a more detailed explanation of Kafka topics and architecture, see What is Apache Kafka? from Confluent, or watch the Apache Kafka Fundamentals video on YouTube.
##### Prerequisites
- Have write access to the `rhodes` repository
- Understand how topic partitions and replicas work
##### Step 1 - Create a new domain/folder to store the KafkaTopic manifest
We use domains to categorize Kafka topics; each domain is represented by a folder in the `rhodes` repository.

All Kafka topics should be stored in `deployment/{environment}/topic/{domain}/{topic-name}.yaml`. Skip this step if the folder already exists.

Register the new folder as an ArgoCD app by updating the file `apps/{environment}/kafka-topic-set.application.yaml`:
```yaml
# kafka-topic-set.application.yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
...
spec:
  ...
  generators:
    - list:
        elements:
          ...
          - domain: new-domain
            # Insert environment here {dev,integration,stg,prod}
            namespace: dev
            branch: master
          ...
```
Then create a new domain folder:

```sh
mkdir -p deployment/{environment}/topic/{domain}

# Example from rhodes root directory
mkdir -p deployment/dev/topic/new-domain
```
##### Step 2 - Create KafkaTopic manifest
Create a new `KafkaTopic` manifest in the domain folder, named `{topic-name}.yaml`:
```yaml
# topic-name.yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: insert-topic-name-here
  labels:
    strimzi.io/cluster: dev
spec:
  partitions: 10
  replicas: 3
  topicName: topic-name-if-different-from-metadata-name
  config:
    cleanup.policy: compact
```
**IMPORTANT**: Don't forget to set the `metadata.labels."strimzi.io/cluster"` value to the name of the environment you want to deploy to.
##### Step 3 - PR and merge manifest to the `master` branch
Create a branch and push the changes. Request a review from other engineers. Merge after approval.
##### Step 4 - Check the topic in kafka-ui
After the PR is merged, check whether your topic has been created via kafka-ui.
List URL for kafka-ui
##### FAQ
###### `spec` values for the `KafkaTopic` manifest

**`partitions` and `replicas` values**

Kafka partition and replica explanation from Stack Overflow
We recommend assigning `replicas: 3` to each topic created, to ensure durability and high availability. Three replicas ensure that the topic is replicated across three different availability zones in GCP.

For `partitions`, we recommend assigning `partitions: 10` at minimum, or more (it's okay to overprovision partitions). This value is related to throughput on the consumer side. You can use the Sizing Calculator for Apache Kafka to calculate the number of partitions you need.
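As a rough illustration of how partition counts relate to throughput, a common rule of thumb (from Confluent's sizing guidance) is to pick enough partitions to cover both producer-side and consumer-side throughput. The numbers below are hypothetical placeholders, not measurements from our cluster:

```python
import math

def partition_count(target_mbps: float,
                    per_partition_produce_mbps: float,
                    per_partition_consume_mbps: float,
                    minimum: int = 10) -> int:
    """Rule of thumb: #partitions = max(t/p, t/c), floored at our
    recommended minimum of 10 partitions."""
    needed = max(target_mbps / per_partition_produce_mbps,
                 target_mbps / per_partition_consume_mbps)
    return max(minimum, math.ceil(needed))

# Hypothetical: 100 MB/s target, 10 MB/s per producer partition,
# 5 MB/s per consumer partition -> the consumer side dominates.
print(partition_count(100, 10, 5))  # 20
```

Measure your own per-partition throughput before relying on numbers like these; the minimum of 10 reflects the recommendation above.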
**`topicName` value**

A `KafkaTopic` manifest creates a topic with the same name as the `metadata.name` value. However, Kubernetes object names only allow alphanumeric characters and `-`. To create a topic whose name contains other characters, use the `spec.topicName` parameter.
```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: topic-name-with-dot
  labels:
    strimzi.io/cluster: dev
spec:
  topicName: topic.name.with.dot
  ...
```
**`config` value**

Please read the Confluent Topic Configurations docs to see all the topic configurations (except for `confluent.*`; we don't use Confluent Platform).
**`cleanup.policy`**
A string that is either “delete” or “compact” or both. This string designates the retention policy to use on old log segments. The default policy (“delete”) will discard old segments when their retention time or size limit has been reached. The “compact” setting will enable log compaction on the topic.
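Since the policy can be both, a sketch of a topic combining the two (hypothetical topic name; Strimzi passes `config` keys straight through to Kafka) might look like:

```yaml
# Hypothetical topic that is compacted, but whose old segments are
# also deleted once retention limits are reached.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: compact-delete-topic
  labels:
    strimzi.io/cluster: dev
spec:
  partitions: 10
  replicas: 3
  config:
    cleanup.policy: "compact,delete"
```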
**`min.insync.replicas`**
When a producer sets acks to “all” (or “-1”), this configuration specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend).
When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of “all”. This will ensure that the producer raises an exception if a majority of replicas do not receive a write.
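Following that typical scenario, a sketch of a manifest enforcing it (hypothetical topic name; the producer still needs to set `acks=all` on its side) could look like:

```yaml
# Hypothetical durable topic: with replicas: 3 and
# min.insync.replicas: 2, writes with acks=all succeed only if a
# majority of replicas acknowledge them.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: durable-topic
  labels:
    strimzi.io/cluster: prod
spec:
  partitions: 10
  replicas: 3
  config:
    min.insync.replicas: 2
```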
##### Examples

Create a topic with a special character:
```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: topic-name-with-dot
  labels:
    strimzi.io/cluster: dev
spec:
  partitions: 10
  replicas: 3
  topicName: topic.name.with.dot
  config:
    cleanup.policy: compact
```
Create a topic with 7-day retention in production:
```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: retention-topic
  labels:
    strimzi.io/cluster: prod
spec:
  partitions: 10
  replicas: 3
  config:
    cleanup.policy: delete
    retention.ms: 604800000
```
### 2.2 - Teleport
A list of how-tos for operating Kargo infrastructure using teleport.
#### 2.2.1 - Access Kubernetes Using Teleport
A guide to accessing Kubernetes via kubectl using teleport.
##### Introduction

This guide will help you access the kargo Kubernetes cluster using `teleport` as an authentication proxy.
##### Prerequisites
##### Step 1 - Login to teleport using GitHub

Log in to kargo's internal teleport at https://teleport.helios.kargo.tech:

```sh
tsh login --proxy=teleport.helios.kargo.tech:443
```
##### Step 2 - List Kubernetes clusters

Check all available Kubernetes clusters in teleport.
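This step's command was not shown in the original page; assuming the standard Teleport CLI, listing the clusters registered with the proxy would be:

```sh
tsh kube ls
```

`tsh kube ls` prints the Kubernetes clusters your teleport user is allowed to connect to, along with their labels.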
##### Step 3 - Login to the Kubernetes cluster

Log in to the desired Kubernetes cluster:

```sh
tsh kube login {cluster-name}
```

After this command executes, `teleport` automatically generates Kubernetes credentials for the cluster and sets the current `kubectl` context.
##### Step 4 - Test the connection

Use a `kubectl` command to test `teleport` authentication:

```sh
kubectl version
# The output should look like this
## Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-04T02:28:17Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
## Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.15-gke.3400", GitCommit:"750002971a60d8a06e0a403c52724257f0f68481", GitTreeState:"clean", BuildDate:"2022-03-08T09:33:43Z", GoVersion:"go1.15.15b5", Compiler:"gc", Platform:"linux/amd64"}

# Or execute
kubectl get pod
```
##### Example

Access the app-dev cluster using teleport:

```sh
# Login to teleport
tsh login --proxy=teleport.helios.kargo.tech:443

# Connect to app-dev cluster
tsh kube login app-dev

# Check pods in the `dev` namespace
kubectl get pod -n dev
```
##### FAQ

**How can I access the production cluster?**

To access the production cluster via `teleport`, you need to be invited to the `production-access` GitHub team. Please contact the team's maintainer to request access to the production cluster.
**I was already invited to the production GitHub team but I still can't access the cluster**

Teleport renews the authorization certificate every 30 minutes. To renew immediately, log in to teleport again.
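A sketch of forcing a fresh certificate by relogging in (the cluster name is a placeholder):

```sh
# Drop the current session and its certificates, then log in again
tsh logout
tsh login --proxy=teleport.helios.kargo.tech:443
tsh kube login {cluster-name}
```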
##### Protip

- Use the `ohmyzsh` kubectl plugin to add useful aliases. You can shorten `kubectl` to `k`, change namespaces using `kcn {namespace name}`, etc.

```sh
# .zshrc
plugins=(... kubectl ...)
```