This repository was archived by the owner on May 14, 2025. It is now read-only.

Commit c8a8c92

Import initial carvel docs
- Relates #4730
1 parent 7e6b2ec commit c8a8c92

File tree

24 files changed

+1755
-0
lines changed


src/carvel/docs/README.adoc

Lines changed: 129 additions & 0 deletions
ifdef::env-github[]
:tip-caption: :bulb:
:note-caption: :information_source:
:important-caption: :heavy_exclamation_mark:
:caution-caption: :fire:
:warning-caption: :warning:
endif::[]
:servers: link:servers.adoc[Servers]
:examples: link:examples.adoc[Examples]
ifndef::env-github[]
:servers: <<servers>>
:examples: <<examples>>
endif::[]
= Spring Cloud Data Flow Carvel Documentation

toc::[]

ifdef::env-github[]
link:configuration-options.adoc[Configuration Options]

link:servers.adoc[Servers]

link:binder.adoc[Binder]

link:database.adoc[Database]

link:examples.adoc[Examples]
endif::[]

The main objectives for a https://carvel.dev[Carvel] integration with dataflow are to provide:

* Exactly one common way to generate _kubernetes_ resources
** Can generate a set of samples automatically
** Can be used for all other use cases when deploying to _kubernetes_
* Automatic configuration of the whole environment based on user choices
* Easy deployment without requiring an existing external _binder_ or _database_
** Can deploy either _rabbit_ or _kafka_ as a binder
** Can deploy either _mysql_ or _postgres_ as a database
** Steps aside if an external _database_ or _binder_ is defined
* Plain k8s templating using https://carvel.dev/ytt[ytt]
** Drive templating with template options
** Work with _kubectl_ without needing _kapp_ or _kapp-controller_
* Package management using https://carvel.dev/kapp-controller[kapp-controller]
** Publish packages for dataflow versions
** Drive package deployment with given package options
* Integration with _tanzu_, i.e. working with the _Tanzu CLI_

[NOTE]
====
While templating can deploy a _binder_ and a _database_ automatically, this is
not a supported production configuration and should be treated as a simple
trial install to get things up and running. It is highly advised to use a
proper deployment of a _binder_ and a _database_, as automatic deployment and
configuration of those are limited.
====

== Deploy Spring Cloud Data Flow

There are various examples under {examples}.

=== Deployment flavours
There are different ways to deploy _Dataflow_ into a _kubernetes_ cluster:

* _kubectl_ with _ytt_ templating <<deployment-kubectl>>
* _kapp_ with deployment files generated from _ytt_ templates <<deployment-kapp>>
* _kapp-controller_ with a _carvel_ package and configured options
<<deployment-kapp-controller>>
* _tanzu-cli_, which is essentially _kapp-controller_ but adds the concept
of a _management cluster_ maintaining _worker clusters_ <<deployment-tanzu>>
[[deployment-kubectl]]
==== Deploy via kubectl
This is the lowest level of deployment, as you are essentially passing
_kubernetes_ yml files generated from _ytt_ templates.

While _kubectl_ gives you great flexibility to handle deployments, it also
comes at the price of maintaining the created resources manually. Essentially
this means you need to be aware of which resources were created if they need
to be cleaned up or maintained in the future.
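A minimal sketch of this flow, assuming the _ytt_ templates live under `src/carvel/config` in this repository (the path and the data value shown are illustrative, not verified option names):

```shell
# Render the ytt templates into plain kubernetes yaml, then apply it.
# Template path and data values are illustrative.
ytt -f src/carvel/config --data-value-yaml scdf.deploy.binder.enabled=true > scdf.yml
kubectl apply -f scdf.yml

# With plain kubectl, cleanup is your own responsibility:
kubectl delete -f scdf.yml
```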
[[deployment-kapp]]
==== Deploy via kapp
Essentially much like deploying with _kubectl_, but uses _kapp_ specific
annotations to give some sense of the order in which deployments are done.

What is nice about deploying via _kapp_ is that it tracks what has been
deployed, so deleting resources from a cluster is easy. This gives you
a great benefit compared to a simple _kubectl_ deployment. You still
work with plain _kubernetes_ yaml files and keep full control over them.
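The same flow with _kapp_ might look like this (the application name and template path are illustrative); because _kapp_ records what it deployed under the application name, deletion is a single command:

```shell
# Deploy the rendered yaml as a tracked kapp application.
ytt -f src/carvel/config > scdf.yml
kapp deploy -a scdf -f scdf.yml

# kapp knows which resources it created, so cleanup needs no yaml files:
kapp delete -a scdf
```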
[[deployment-kapp-controller]]
==== Deploy via kapp-controller
Makes use of a _carvel_ package and deploys via a controller with given options.

With _kapp-controller_ you introduce the concepts of a _carvel package_ and a
_package repository_, which give an even higher level of deployment into
a cluster. Essentially you no longer work with low level yaml files but
with configuration options which drive the templating of _kubernetes_
resources. Furthermore, when deleting something from a cluster, you are no
longer deleting _kubernetes_ resources directly, you are deleting a package.
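As a sketch, installing such a package boils down to a `PackageInstall` resource referencing the package and a secret holding the options; the package name, version, service account and secret name below are placeholders, not the published coordinates:

```shell
kubectl apply -f - <<'EOF'
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  name: scdf
spec:
  serviceAccountName: scdf-sa       # placeholder service account
  packageRef:
    refName: scdf.example.com       # placeholder package name
    versionSelection:
      constraints: 1.0.0            # placeholder version
  values:
  - secretRef:
      name: scdf-values             # secret holding the package options
EOF
```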
[[deployment-tanzu]]
==== Deploy via Tanzu CLI
Essentially like deploying with _kapp-controller_, where the _Tanzu CLI_ gives
a higher level of package management. The _Tanzu CLI_ imposes some limitations
on how it uses _carvel_ _packages_ and _package repositories_. These
limitations are mostly around the fact that the _CLI_ is always slightly
behind the integrated functionality that the rest of the _carvel_ framework
provides.

[NOTE]
====
While the _Tanzu CLI_ works with a correctly configured _kubernetes_ cluster
with _kapp-controller_ installed, its power comes from a _management cluster_
managing a _worker cluster_. If you don't want to go down this route, it
may be easier to work with the lower level deployment options mentioned above.
====
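For comparison, the rough _Tanzu CLI_ equivalent of the `PackageInstall` approach is the `tanzu package install` command; the package name, version and values file below are placeholders:

```shell
# Install the package through the Tanzu CLI package plugin.
tanzu package install scdf \
  --package-name scdf.example.com \
  --version 1.0.0 \
  --values-file scdf-values.yml
```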
=== Configure Servers
For more info on configuring servers, see {servers}.

=== Binder and Database
Whether you want to use _rabbit_ or _kafka_ as a binder and _mysql_ or
_postgres_ as a database, we provide an easy deployment which steps aside
when external services are defined.

==== Use Deployed Services
By default _postgres_ and _rabbit_ are used as the database and the binder.

src/carvel/docs/binder.adoc

Lines changed: 38 additions & 0 deletions
ifdef::env-github[]
:tip-caption: :bulb:
:note-caption: :information_source:
:important-caption: :heavy_exclamation_mark:
:caution-caption: :fire:
:warning-caption: :warning:
:scdf-deploy-binder-enabled: link:configuration-options.adoc#configuration-options-scdf.deploy.binder.enabled[scdf.deploy.binder.enabled]
:scdf-binder-kafka-broker-host: link:configuration-options.adoc#configuration-options-scdf.binder.kafka.broker.host[scdf.binder.kafka.broker.host]
:scdf-binder-kafka-broker-port: link:configuration-options.adoc#configuration-options-scdf.binder.kafka.broker.port[scdf.binder.kafka.broker.port]
:scdf-binder-kafka-zk-host: link:configuration-options.adoc#configuration-options-scdf.binder.kafka.zk.host[scdf.binder.kafka.zk.host]
:scdf-binder-kafka-zk-port: link:configuration-options.adoc#configuration-options-scdf.binder.kafka.zk.port[scdf.binder.kafka.zk.port]
:scdf-binder-rabbit-host: link:configuration-options.adoc#configuration-options-scdf.binder.rabbit.host[scdf.binder.rabbit.host]
:scdf-binder-rabbit-port: link:configuration-options.adoc#configuration-options-scdf.binder.rabbit.port[scdf.binder.rabbit.port]
endif::[]
ifndef::env-github[]
:scdf-deploy-binder-enabled: <<configuration-options-scdf.deploy.binder.enabled>>
:scdf-binder-kafka-broker-host: <<configuration-options-scdf.binder.kafka.broker.host>>
:scdf-binder-kafka-broker-port: <<configuration-options-scdf.binder.kafka.broker.port>>
:scdf-binder-kafka-zk-host: <<configuration-options-scdf.binder.kafka.zk.host>>
:scdf-binder-kafka-zk-port: <<configuration-options-scdf.binder.kafka.zk.port>>
:scdf-binder-rabbit-host: <<configuration-options-scdf.binder.rabbit.host>>
:scdf-binder-rabbit-port: <<configuration-options-scdf.binder.rabbit.port>>
endif::[]

[[binder]]
== Binder
By default a binder is deployed as a service for both the skipper and
dataflow servers. The servers are configured to use these binder services.

=== Configure External Kafka Binder
Disable binder deployment with {scdf-deploy-binder-enabled} and define custom
settings for the binder under _scdf.binder_. You need to set all of the options
{scdf-binder-kafka-broker-host}, {scdf-binder-kafka-broker-port},
{scdf-binder-kafka-zk-host} and {scdf-binder-kafka-zk-port}.
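As a sketch, such a values file could be written like this, assuming the dotted option names map to nested YAML keys; the hosts and ports are placeholders:

```shell
# Write a values file that disables the deployed binder and points the
# servers at an external Kafka. Hosts and ports are placeholders.
cat > kafka-values.yml <<'EOF'
scdf:
  deploy:
    binder:
      enabled: false
  binder:
    kafka:
      broker:
        host: kafka.example.com
        port: 9092
      zk:
        host: zookeeper.example.com
        port: 2181
EOF
```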

=== Configure External Rabbit Binder
Disable binder deployment with {scdf-deploy-binder-enabled} and define custom
settings for the binder under _scdf.binder_. You need to set both
{scdf-binder-rabbit-host} and {scdf-binder-rabbit-port}.
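The rabbit case is analogous; again the nesting of the dotted option names is an assumption and the host and port are placeholders:

```shell
# Write a values file that disables the deployed binder and points the
# servers at an external RabbitMQ. Host and port are placeholders.
cat > rabbit-values.yml <<'EOF'
scdf:
  deploy:
    binder:
      enabled: false
  binder:
    rabbit:
      host: rabbitmq.example.com
      port: 5672
EOF
```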
