# Getting Started
This guide walks you through onboarding a new service to the Internal Developer Platform. By the end, your service will be registered in the catalog, building through CI/CD, deployed to Kubernetes, and connected to observability.
Time to complete: ~45 minutes for a new service.
```mermaid
flowchart LR
    A[1. Register service] --> B[2. Add .platform.yml]
    B --> C[3. Add Helm values]
    C --> D[4. Enable observability]
    D --> E[5. Verify]
    style A fill:#4a90d9,color:#fff
    style B fill:#4a90d9,color:#fff
    style C fill:#4a90d9,color:#fff
    style D fill:#4a90d9,color:#fff
    style E fill:#2da44e,color:#fff
```
## Prerequisites
Before you start, make sure you have:
- Access to the internal GitHub organization
- `kubectl` configured for the staging cluster (`kubectl config use-context staging`)
- `helm` 3.x installed
- A service repository with a `Dockerfile` at the root
If you’re missing access, open a request in #platform-access.
## Step 1: Register your service in the catalog
Every service on the platform needs a YAML metadata file in the `data/services/` directory of this repository.
Create a file named `your-service.yaml` with the following structure:
```yaml
name: your-service
description: One or two sentences describing what this service does.
owner: team-yourteam
language: go        # go | python | java | node | rust
repository: https://github.com/vicioussoul/your-service
tier: standard      # critical | standard | internal
deployment:
  type: helm
  namespace: yourteam
  replicas:
    production: 2
    staging: 1
sla:
  availability: "99.9%"
  rto: "30m"
  rpo: "15m"
observability:
  logs: true
  metrics: true
  tracing: false
contacts:
  oncall: https://oncall.mycorp.internal/your-service
  slack: "#team-yourteam"
dependencies:
  - auth  # list services your service calls
```
Choosing a tier:
| Tier | Use when | SLA | Incident response |
|---|---|---|---|
| `critical` | Outage directly impacts revenue or user authentication | 99.99% | 24/7, 5 min response |
| `standard` | Core product functionality | 99.9% | Business hours |
| `internal` | Internal tooling, batch jobs | Best effort | Next business day |
Open a pull request. The CI pipeline validates your YAML against the schema automatically — you’ll see errors inline if the file is invalid.
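CI is the source of truth for validation, but you can catch the most common mistakes before pushing. A minimal local pre-check sketch in Python; the required fields and allowed values below are assumptions mirroring the example above, not the platform's authoritative schema:

```python
# Hypothetical pre-check for a parsed service metadata file.
# Field names and allowed values mirror the example above; the real
# schema is enforced by the platform CI pipeline.
REQUIRED_FIELDS = {"name", "description", "owner", "language", "repository", "tier"}
ALLOWED_TIERS = {"critical", "standard", "internal"}
ALLOWED_LANGUAGES = {"go", "python", "java", "node", "rust"}

def precheck(service: dict) -> list[str]:
    """Return human-readable problems; an empty list means the basics look OK."""
    errors = [f"missing required field: {f}"
              for f in sorted(REQUIRED_FIELDS - service.keys())]
    if service.get("tier") not in ALLOWED_TIERS:
        errors.append(f"tier must be one of {sorted(ALLOWED_TIERS)}")
    if service.get("language") not in ALLOWED_LANGUAGES:
        errors.append(f"language must be one of {sorted(ALLOWED_LANGUAGES)}")
    return errors
```

Run it on the dict produced by `yaml.safe_load` of your file; an empty result means the basics are in place, but the pipeline's schema check still has the final say.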
## Step 2: Add the platform pipeline to your repository
Create a file `.platform.yml` at the root of your service repository:

```yaml
# .platform.yml
platform:
  version: "1"
pipeline:
  language: go      # matches your service YAML
  test: true
  lint: true
image:
  registry: registry.mycorp.internal
  name: your-service
deploy:
  staging:
    auto: true      # deploy automatically on merge to main
  production:
    auto: false     # production requires manual approval
```
This file is read by the platform CI template. The template handles building, testing, publishing the Docker image, and triggering the deployment. You don’t need to write your own pipeline from scratch.
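The gating logic is easy to reason about: on a merge to main, the template deploys every environment whose `auto` flag is true and queues the rest for manual approval. A sketch of that decision in Python (the function name and shapes here are illustrative, not the template's actual code):

```python
def deploy_plan(config: dict) -> tuple[list[str], list[str]]:
    """Split environments into auto-deployed vs. approval-gated,
    based on the `deploy` section of a parsed .platform.yml."""
    auto, gated = [], []
    for env, settings in config.get("deploy", {}).items():
        (auto if settings.get("auto") else gated).append(env)
    return auto, gated

# With the file above: staging deploys on merge, production waits.
plan = deploy_plan({"deploy": {"staging": {"auto": True},
                               "production": {"auto": False}}})
```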
See CI/CD for the full configuration reference.
## Step 3: Reference the platform chart
The platform team maintains a library of Helm charts for common workload types in a dedicated repository. Charts are published in OCI format to registry.mycorp.internal/charts and versioned independently of your service.
You don’t write or copy charts — you declare a dependency on the relevant chart and provide a values.yaml that overrides only what you need. When the platform team releases a new chart version, you adopt it by updating the dependency.
Structure in your repository:
```
your-service/
└── deploy/
    └── helm/
        ├── Chart.yaml            # declares the platform chart as a dependency
        ├── values.yaml           # your overrides (production)
        └── values.staging.yaml   # staging-specific overrides
```
`Chart.yaml` — reference the chart from the OCI registry:

```yaml
apiVersion: v2
name: your-service
version: 1.0.0
dependencies:
  - name: deployment
    version: ">=1.0.0"
    repository: oci://registry.mycorp.internal/charts
```
Pull the chart before the first deploy:
```bash
helm dependency update deploy/helm
```
`values.yaml` — override the defaults that matter for your service. Everything else falls back to the chart’s built-in defaults:

```yaml
deployment:
  image:
    repository: registry.mycorp.internal/your-service
    tag: latest
  port: 8080
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
  env:
    LOG_LEVEL: info
    AUTH_SERVICE_URL: http://auth.security.svc.cluster.local
```
How values merge: when Helm deploys your service, your values.yaml is merged with the chart’s values.yaml. Your values always win — the chart defaults only fill in what you haven’t specified.
`values.staging.yaml` — override staging-specific settings without duplicating the whole file:

```yaml
deployment:
  replicas: 1
  resources:
    requests:
      cpu: 50m
      memory: 64Mi
```
The CI pipeline passes both files automatically: `values.yaml` first, then `values.staging.yaml` on top for staging deploys.
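Conceptually this layering is a deep merge of maps: staging values over your production values over the chart defaults, key by key. A rough Python model of the behavior, not Helm's actual implementation (the chart-default numbers below are made up for illustration, and Helm's real merge also handles null-deletion, which this skips):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge mappings; override wins on conflicts,
    mirroring how Helm layers values files over chart defaults."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical chart defaults; the real ones live in the platform chart.
chart_defaults = {"deployment": {"replicas": 3, "port": 8080,
                                 "resources": {"requests": {"cpu": "250m", "memory": "256Mi"}}}}
values = {"deployment": {"resources": {"requests": {"cpu": "100m", "memory": "128Mi"}}}}
staging = {"deployment": {"replicas": 1,
                          "resources": {"requests": {"cpu": "50m", "memory": "64Mi"}}}}

effective = deep_merge(deep_merge(chart_defaults, values), staging)
```

Note how `port: 8080` survives from the defaults even though neither override mentions it, while `replicas` and the resource requests take the staging values.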
See Kubernetes for the full values reference for each chart type (Deployment, Job, CronJob, StatefulSet).
## Step 4: Enable observability
Before your first production deployment, configure your service to emit logs in JSON format and expose a `/metrics` endpoint.
Logging (required):
```python
# Python example
import logging, json

logging.basicConfig(
    format=json.dumps({
        "time": "%(asctime)s",
        "level": "%(levelname)s",
        "service": "your-service",
        "message": "%(message)s",
    })
)
```
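You can sanity-check that this format string really yields parseable JSON. This works because the fields are substituted into the JSON template textually, so it holds only as long as log messages contain no quotes or backslashes:

```python
import json, logging

formatter = logging.Formatter(json.dumps({
    "time": "%(asctime)s",
    "level": "%(levelname)s",
    "service": "your-service",
    "message": "%(message)s",
}))
# Build a record by hand just to exercise the formatter.
record = logging.LogRecord("your-service", logging.INFO, "app.py", 0,
                           "request handled", None, None)
line = formatter.format(record)
parsed = json.loads(line)   # would raise if the output were not valid JSON
print(parsed["level"], parsed["message"])   # prints: INFO request handled
```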
Metrics (required for critical and standard tiers):
Expose a `/metrics` endpoint in Prometheus format, then enable scraping in your `values.yaml`:

```yaml
deployment:
  metrics:
    enabled: true   # adds Prometheus annotations to the Service automatically
    path: /metrics  # the default; omit unless your endpoint differs
```
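The endpoint itself just returns plain text in the Prometheus exposition format: an optional `# TYPE` line per metric, then `name value` samples. If you are not using a client library, a minimal hand-rolled sketch looks like this (in practice, prefer an official Prometheus client for your language):

```python
def render_metrics(counters: dict[str, float]) -> str:
    """Render counters in the Prometheus text exposition format,
    e.g. for serving from a /metrics handler."""
    lines = []
    for name, value in sorted(counters.items()):
        lines.append(f"# TYPE {name} counter")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

body = render_metrics({"http_requests_total": 42})
```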
See Observability for the full setup guide.
## Step 5: Verify your setup
After merging your pull request and completing the first pipeline run, verify:
| Check | Command |
|---|---|
| Service appears in catalog | Open the Service Catalog page in this docs site |
| Pod is running in staging | `kubectl get pods -n yourteam` |
| Metrics are being scraped | `kubectl port-forward -n yourteam svc/your-service 8080:8080`, then `curl localhost:8080/metrics` |
| Logs appear in the log aggregator | Search for `service: your-service` |
## Next steps

- Read the CI/CD guide to understand pipeline stages and how to customize them
- Learn about Kubernetes deployment options including HPAs, config maps, and secrets
- Set up distributed tracing if your service makes downstream calls