chore: use exec to reduce helm log #1460

Merged
merged 1 commit into devstream-io:mvp on Feb 2, 2023

Conversation

aFlyBird0
Member

❯ minikube delete
🔥  Deleting "minikube" in docker ...
🔥  Deleting container "minikube" ...
🔥  Removing /Users/lhp/.minikube/machines/minikube ...
💀  Removed all traces of the "minikube" cluster.
❯ go run ./cmd/devstream start
I'll prepare some tools for you.
Let's get started.

✅ Docker is ready.
😄  minikube v1.29.0 on Darwin 13.1 (arm64)
✨  Automatically selected the docker driver
📌  Using Docker Desktop driver with root privileges
❗  Local proxy ignored: not passing HTTP_PROXY=http://127.0.0.1:7890 to docker env.
❗  Local proxy ignored: not passing HTTPS_PROXY=http://127.0.0.1:7890 to docker env.
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=3885MB) ...
❗  Local proxy ignored: not passing HTTP_PROXY=http://127.0.0.1:7890 to docker env.
❗  Local proxy ignored: not passing HTTPS_PROXY=http://127.0.0.1:7890 to docker env.
🌐  Found network options:
    ▪ http_proxy=http://127.0.0.1:7890
❗  You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
📘  Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
    ▪ https_proxy=http://127.0.0.1:7890
🐳  Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
✅ Minikube is ready.
✅ Helm is ready.
Install Argo CD now.: y
"argo" has been added to your repositories
NAME: argocd
LAST DEPLOYED: Thu Feb  2 18:47:00 2023
NAMESPACE: argocd
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
DEPRECATED option createAggregateRoles - Use global.rbac.aggregatedRoles

In order to access the server UI you have the following options:

1. kubectl port-forward service/argocd-server -n argocd 8080:443

    and then open the browser on http://localhost:8080 and accept the certificate

2. enable ingress in the values file `server.ingress.enabled` and either
      - Add the annotation for ssl passthrough: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-1-ssl-passthrough
      - Set the `configs.params."server.insecure"` in the values file and terminate SSL at your ingress: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-multiple-ingress-objects-and-hosts


After reaching the UI the first time you can login with username: admin and the random password generated during the installation. You can find the password by running:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

(You should delete the initial secret afterwards as suggested by the Getting Started Guide: https://argo-cd.readthedocs.io/en/stable/getting_started/#4-login-using-the-cli)
✅ Argo CD is ready.

Everything is going well now.
Enjoy it!☺️

❯ go run ./cmd/devstream start
I'll prepare some tools for you.
Let's get started.

✅ Docker is ready.
✅ Minikube is ready.
✅ Helm is ready.
✅ Argo CD is ready.

Everything is going well now.
Enjoy it!☺️

Signed-off-by: Bird <aflybird0@gmail.com>
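
For context, here is a minimal sketch of the general technique the PR title describes: shelling out to the helm CLI through Go's os/exec and buffering helm's verbose output (NOTES, deprecation warnings) so it is only surfaced when the command fails. This is an illustration under stated assumptions, not the actual diff of this PR; the helper name installArgoCD and the chart arguments are made up for the example.

```go
// Sketch only: run helm via os/exec and suppress its output on success.
// installArgoCD and the chart arguments are hypothetical, not DevStream code.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func installArgoCD() error {
	cmd := exec.Command(
		"helm", "upgrade", "--install", "argocd", "argo/argo-cd",
		"--namespace", "argocd", "--create-namespace",
	)

	// Capture stdout/stderr in a buffer instead of streaming them to the console.
	var out bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &out

	if err := cmd.Run(); err != nil {
		// Only show helm's full output when something went wrong.
		return fmt.Errorf("helm install failed: %w\n%s", err, out.String())
	}
	return nil
}

func main() {
	if err := installArgoCD(); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("✅ Argo CD is ready.")
}
```

Buffering the output this way keeps a successful run down to the tool's own status lines, while a failed helm invocation still prints the full helm output for debugging.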
@aFlyBird0 aFlyBird0 requested a review from a team as a code owner February 2, 2023 10:48
@daniel-hutao daniel-hutao merged commit 0a6239c into devstream-io:mvp Feb 2, 2023