Trigger pipelines remotely, stream logs in real-time with SSE, and run isolated Docker containers
- Run as a long-running service with HTTP API endpoints for remote pipeline execution
- Monitor pipeline execution via Server-Sent Events (SSE) from multiple clients simultaneously
- Isolated execution environments with full Docker and custom Dockerfile support
- Automatic retries with exponential backoff for handling transient failures
- YAML-based configuration with validation, environment variables, and conditional execution
- Run multiple jobs in parallel for faster pipeline execution
Keep Pin running as a service and trigger pipelines remotely
The daemon emits the following SSE event types:

- `daemon_start` - service started
- `pipeline_trigger` - new execution
- `job_container_start` - container started
- `log` - real-time logs
- `job_completed` - job finished
- `pipeline_complete` - pipeline done

```javascript
const eventSource = new EventSource('http://localhost:8081/events');

eventSource.onmessage = (event) => {
  const data = JSON.parse(event.data);
  console.log(`[${data.level}] ${data.message}`);
};
```
```yaml
workflow:
  - build

build:
  image: golang:alpine
  copyFiles: true
  script:
    - go build -o app .
```

```shell
pin apply -f pipeline.yaml
```
Or start in daemon mode:

```shell
pin apply --daemon
```
See how Pin handles different scenarios
Run multiple services simultaneously with isolated ports
```yaml
workflow:
  - user-service
  - auth-service
  - api-gateway

user-service:
  image: node:18-alpine
  copyFiles: true
  port:
    - "127.0.0.1:3001:3000"
  parallel: true
  script:
    - cd services/user-service
    - npm install && npm start

auth-service:
  image: node:18-alpine
  copyFiles: true
  port:
    - "127.0.0.1:3002:3000"
  parallel: true
  script:
    - cd services/auth-service
    - npm install && npm start
```
Deploy to different environments based on branch
```yaml
workflow:
  - build
  - deploy-staging
  - deploy-production

deploy-staging:
  image: alpine:latest
  condition: $BRANCH == "develop"
  script:
    - echo "Deploying to staging..."

deploy-production:
  image: alpine:latest
  condition: $BRANCH == "main"
  script:
    - echo "Deploying to production..."
```

```shell
BRANCH=main pin apply -f deploy.yaml
```
Automatic retries with exponential backoff for flaky operations
```yaml
network-operation:
  image: alpine:latest
  retry:
    attempts: 5
    delay: 1
    backoff: 2.0
  script:
    - wget https://api.example.com/data
    - cat data
```

Retry delays: 1s → 2s → 4s → 8s → 16s
Build and use your own development environment
```yaml
workflow:
  - setup-env
  - run-dev

setup-env:
  dockerfile: "./dev.Dockerfile"
  copyFiles: true
  script:
    - echo "Environment ready"

run-dev:
  image: setup-env-custom:latest
  port:
    - "8080:8080"
  env:
    - NODE_ENV=development
  script:
    - npm run dev
```
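The contents of `dev.Dockerfile` are up to you; a minimal hypothetical example matching the `npm run dev` step above:

```dockerfile
# Hypothetical dev.Dockerfile; adapt to your project
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
```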
Trigger pipelines remotely and monitor in real-time
```shell
# Start daemon
pin apply --daemon

# Trigger from anywhere
curl -X POST \
  -H "Content-Type: application/yaml" \
  --data-binary @pipeline.yaml \
  http://localhost:8081/trigger

# Watch live events
curl -N http://localhost:8081/events
```